Article

A Simple Learning Approach for Robust Tracking Control of a Class of Dynamical Systems

by Mahmut Reyhanoglu * and Mohammad Jafari *

Robotics Engineering Program, Columbus State University, Columbus, GA 31907, USA

* Authors to whom correspondence should be addressed.
Electronics 2023, 12(9), 2026; https://doi.org/10.3390/electronics12092026
Submission received: 5 April 2023 / Revised: 22 April 2023 / Accepted: 25 April 2023 / Published: 27 April 2023

Abstract
This paper studies the robust tracking control problem for a class of uncertain nonlinear dynamical systems subject to unknown disturbances. A robust trajectory tracking control law is designed via a simple learning-based control strategy. In the developed design, the cost function based on the desired closed-loop error dynamics is minimized by means of a gradient-descent technique. A stability proof for the closed-loop nonlinear system is provided based on the pseudo-linear system theory. The learning capability of the developed robust trajectory tracking control law allows the system to mitigate the adverse effects of the uncertainties and disturbances. Numerical simulation results for a planar PPR robot are included to illustrate the effectiveness of the developed control law.

1. Introduction

Several robust control approaches have been proposed to deal with the issues associated with uncertainties, disturbances, and noise [1,2,3,4,5,6,7]. The sliding mode control (SMC) method represents a popular robust control approach [8,9,10,11,12,13]. SMC theory is based on using high controller gains to achieve robust performance despite the disturbances and uncertainties. However, high gains may cause unwanted chattering effects on the system response [14].
The use of learning-based model predictive controllers is an alternative approach to dealing with disturbances and uncertainties [15,16,17,18,19]. Neural network (NN)-based learning is a common control technique with various types of applications, including NN-based online learning [20], adaptive NN-based stabilization [21], and adaptive NN-based backstepping control [22,23]. The main concern in applying NN-based learning is the inherent difficulty in proving closed-loop system stability [24].
Another control approach combines a fuzzy neural network (FNN) with a conventional PD controller to realize the so-called PD+FNN control [25,26,27]. This approach employs an SMC-based learning technique that allows for robust control without relying on partial derivatives or matrix inversions. The work in [28] presents a robust adaptive learning control for a spacecraft in the presence of unknown disturbances and uncertainties; the asymptotic stability of the closed-loop system with the proposed adaptive control law is proven based on Lyapunov theory.
Type-2 neuro-fuzzy controllers (T2NFCs) have also found wide use; they fuse type-2 fuzzy logic and artificial neural network controllers to achieve better performance [29,30,31,32]. The suggested online adaptive rules in T2NFCs lessen the need for detailed gain tuning by learning the disturbances and the system dynamics in an online fashion. The sliding mode learning control (SMLC) approach for controlling uncertain dynamical systems is addressed in [9]. In this work, a T2NFC learns the system behavior while a conventional control term provides the system stability; the T2NFC then rapidly takes over full control of the system. This SMLC approach is shown to be capable of learning the system behavior even without the knowledge of any mathematical model, so that the closed-loop system achieves robust control performance. The neuro-adaptive SMC approach has also been effectively employed in the control of robotic systems [33,34,35].
The feedback linearization control (FLC) technique is another widely used approach. However, the performance of the traditional FLC approach degrades under uncertainties and disturbances such that the closed-loop error fails to converge to zero [36,37]. In the literature, FLC with an integral action (FLC-I) has been proposed to ensure convergence of the steady-state error to zero. To overcome the drawbacks of the traditional FLC, several learning approaches have been proposed in conjunction with FLC [38,39,40]. In [40], an artificial NN with a nonlinear auto-regressive moving-average model has been designed to facilitate learning of the feedback-linearized inputs for a nonlinear plant. In [41], a Gaussian process methodology is employed to determine a model for the FLC technique with minimal a priori knowledge of the system. In addition, in the literature, a dynamic linearization strategy has been designed for discrete-time nonlinear systems [42,43,44,45,46]. For systems that undergo varying working conditions, such as time-varying uncertainty and delay [47,48], the aforementioned methods may not be good candidates. One approach to deal with this drawback is to incorporate disturbance observers for disturbance compensation [49,50,51,52,53,54,55]. For example, Ref. [49] proposes a gradient-descent-based learning gain as well as a disturbance observer for nonlinear systems.
Although all these learning-based and nonlinear methods have several benefits, such as handling uncertainties, unmodeled dynamics, and disturbances, their real-time implementation is challenging because most of them are computationally expensive. To address this, in this paper, we generalize the fundamental steps presented in our previous work [56,57] to design a simple learning (SL) control strategy for the trajectory tracking of a class of dynamical systems. The developed SL strategy utilizes update rules based on the desired closed-loop error dynamics to adjust its control gains and the disturbance estimates in the feedback control law. Here, the FLC method is used for designing the traditional feedforward control law based on the nominal model of the system. In the developed strategy, the gradient-descent method is used to minimize the desired closed-loop error function, thereby yielding the adaptation rules for the control gains and disturbance estimates. The stability of the closed-loop nonlinear system is mathematically proven based on the pseudo-linear system theory.
The main contributions of this paper are (a) the development of a general, computationally efficient, simple learning-based robust tracking control law and (b) the illustration of the effectiveness of the designed control law via computer simulation results using the planar PPR robot manipulator. Additionally, it is worth noting that the developed algorithm is computationally efficient enough for real-time implementation, as shown in our previous work [56,57].
The paper is organized as follows: The mathematical model of a class of dynamical systems studied in this paper is presented in Section 2. In Section 3, an FLC-based control law is designed. In Section 3.1, the update rules for controller gains and disturbance estimates are derived. Proof of the closed-loop stability based on the pseudo-linear system theory is provided in Section 3.2. The satisfactory performance of the proposed method is illustrated by providing numerical simulations of the PPR robot manipulator in Section 4, and finally, Section 5 presents the conclusions and future research work.

2. Mathematical Model

Let $(Q, L)$ be a mechanical system with $n$ degrees of freedom. Here $Q \subseteq \mathbb{R}^n$ is the configuration manifold (assumed to be a smooth manifold) and the smooth real-valued function $L = L(q, \dot{q})$ is the Lagrangian, where $q \in Q$ and $\dot{q} \in \mathbb{R}^n$ are referred to as the $n$-vectors of generalized configuration variables and generalized velocity variables, respectively. In general, $L$ can be expressed as
$$L = \frac{1}{2}\dot{q}^T M(q)\dot{q} - U(q)$$
where $\frac{1}{2}\dot{q}^T M(q)\dot{q}$ is the kinetic energy of the system and the smooth real-valued function $U(q)$ denotes the potential energy of the system. Here the $n \times n$ smooth matrix function $M(q)$ (called the generalized inertia matrix) is assumed to be symmetric and positive definite. Let $\tau \in \mathbb{R}^n$ denote the $n$-vector of generalized forces/torques acting on the system. Then, the Euler–Lagrange equations [58] are given by
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = \tau,$$
which can be expressed as Equation (3) considering all the model uncertainties and disturbances:
$$M(q)\ddot{q} + F(q, \dot{q}) = B(q)u + \Delta(q, \dot{q}) + w(t)$$
Here $\ddot{q} \in \mathbb{R}^n$ is referred to as the $n$-vector of generalized acceleration variables, $u \in \mathbb{R}^n$ is the $n$-vector of control input variables, $B(q)$ is an invertible $n \times n$ smooth matrix function, and $F(q, \dot{q})$ is an $n$-vector smooth function; in addition, $\Delta(q, \dot{q}) \in \mathbb{R}^n$ and $w(t) \in \mathbb{R}^n$ represent the $n$-vectors of modeling uncertainties and external disturbances, respectively.
Define the state $x = [x_1^T, x_2^T]^T = [q^T, \dot{q}^T]^T$. Then Equation (3) can be expressed in state-space form as
$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = f(x) + g(x_1)u + d$$
where
$$f(x) = -M^{-1}(x_1)F(x), \quad g(x_1) = M^{-1}(x_1)B(x_1), \quad d = M^{-1}\left[\Delta(x) + w\right]$$
and $d \in \mathbb{R}^n$ represents a lumped disturbance vector that comprises the modeling uncertainties and external disturbances. Here, the control design goal is to design a control input $u \in \mathbb{R}^n$ so that the system accurately tracks a given reference trajectory $r(t) = [r_1^T(t), r_2^T(t)]^T$, where $r_2 = \dot{r}_1$, while the state and control variables remain bounded.
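As a concrete illustration, the state-space model above can be sketched in code; this is a minimal sketch, and the function name `dynamics` and the callable arguments `M`, `F`, `B` are illustrative choices rather than notation from the paper.

```python
import numpy as np

def dynamics(x, u, d, M, F, B):
    """State-space form of Eqs. (4)-(5): x = [q, qdot], with
    f(x) = -M^{-1}(q) F(q, qdot), g(x1) = M^{-1}(q) B(q),
    and lumped disturbance d acting at the acceleration level."""
    n = len(u)
    q, qdot = x[:n], x[n:]
    Minv = np.linalg.inv(M(q))
    qddot = Minv @ (-F(q, qdot) + B(q) @ u) + d  # f(x) + g(x1) u + d
    return np.concatenate([qdot, qddot])
```

For instance, with $M = I$, $F = 0$, $B = I$, and no disturbance, a unit input produces unit acceleration, as expected of a double integrator.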

3. Control Design

To formulate the tracking problem for the given reference trajectory, define the tracking error vector $e(t) = [e_1^T(t), e_2^T(t)]^T$, where $e_1 = r_1 - x_1$ and $e_2 = r_2 - x_2$. Consider the following control input:
$$u = g^{-1}(x_1)\left[-f + K_1 e_1 + K_2 e_2 + \dot{r}_2 - \hat{d}\right]$$
where the $n \times n$ gain matrices $K_1$ and $K_2$ are symmetric and positive definite and $\hat{d} \in \mathbb{R}^n$ represents the estimated disturbance vector. In what follows, for simplicity of the development in this paper, we will assume $K_1 = k_1 I_n$ and $K_2 = k_2 I_n$, where $k_1 \in \mathbb{R}$ and $k_2 \in \mathbb{R}$ are positive control gains and $I_n$ denotes the $n \times n$ identity matrix, so that Equation (7) can be rewritten as
$$u = g^{-1}(x_1)\left[-f + k_1 e_1 + k_2 e_2 + \dot{r}_2 - \hat{d}\right]$$
The closed-loop error dynamics can then be expressed as
$$\dot{e}_1 = e_2$$
$$\dot{e}_2 = -k_1 e_1 - k_2 e_2 - d + \hat{d}$$
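A minimal sketch of the scalar-gain control law of Equation (8) follows, assuming `f` and `g` are callables returning the nominal model terms; the function name `tracking_control` is a hypothetical choice for illustration.

```python
import numpy as np

def tracking_control(x, r, rdot2, d_hat, k1, k2, f, g):
    """Scalar-gain tracking control of Eq. (8):
    u = g^{-1}(x1) [ -f(x) + k1*e1 + k2*e2 + rdot2 - d_hat ],
    with e1 = r1 - x1 (position error) and e2 = r2 - x2 (velocity error)."""
    n = len(d_hat)
    e1 = r[:n] - x[:n]
    e2 = r[n:] - x[n:]
    v = -f(x) + k1 * e1 + k2 * e2 + rdot2 - d_hat
    return np.linalg.solve(g(x[:n]), v)  # solve g u = v instead of inverting g
```

Solving the linear system rather than forming $g^{-1}$ explicitly is a standard numerical choice; with the nominal model exact and $\hat{d} = d$, this input produces exactly the closed-loop error dynamics of Equations (9) and (10).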

3.1. Update Rules

Let $k_d = [k_1^d, k_2^d]^T$ denote the desired gain vector. Consider the following desired closed-loop error dynamics:
$$c(e, k_d) = \dot{e}_2 + k_2^d e_2 + k_1^d e_1$$
The update rules are derived by using a gradient-descent method to minimize the following cost function:
$$C = \frac{1}{2}c^T(e, k_d)\, c(e, k_d)$$
The update rule for the $i$th controller gain is given by
$$\dot{k}_i = -\alpha_i \frac{\partial C}{\partial k_i} = -\alpha_i c^T(e, k_d)\frac{\partial c}{\partial k_i} = \alpha_i c^T(e, k_d)\, e_i$$
where the $i$th controller gain's learning rate is denoted by $\alpha_i > 0$. Similarly, the disturbance estimate can be updated using the rule below:
$$\dot{\hat{d}} = -\alpha_{\hat{d}} \frac{\partial C}{\partial \hat{d}} = -\alpha_{\hat{d}} \left(\frac{\partial c}{\partial \hat{d}}\right)^T c(e, k_d) = -\alpha_{\hat{d}}\, c(e, k_d)$$
where $\alpha_{\hat{d}} > 0$ is the learning rate for the disturbance estimate $\hat{d}$. The disturbance estimates and the controller gains are updated until the cost function converges to zero (i.e., $C(e, k_d) \to 0$).
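The gradient-descent updates above can be sketched as below; this is a minimal sketch that assumes the errors are NumPy vectors, and the name `update_rules` is hypothetical.

```python
import numpy as np

def update_rules(e1, e2, e2dot, k1d, k2d, alpha1, alpha2, alpha_dhat):
    """Gradient-descent updates of Eqs. (13)-(14) for the cost
    C = 0.5 c^T c with c = e2dot + k2d*e2 + k1d*e1:
      k1_dot   =  alpha1 * (c^T e1)
      k2_dot   =  alpha2 * (c^T e2)
      dhat_dot = -alpha_dhat * c."""
    c = e2dot + k2d * e2 + k1d * e1
    k1_dot = alpha1 * float(c @ e1)
    k2_dot = alpha2 * float(c @ e2)
    dhat_dot = -alpha_dhat * c
    return k1_dot, k2_dot, dhat_dot
```

In practice these rates would be integrated alongside the plant dynamics (e.g., with the same RK4 scheme used in the simulations).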

3.2. Stability Proof

The closed-loop error dynamics of the system under study can be rewritten as
$$\ddot{e}_1 + k_2 \dot{e}_1 + k_1 e_1 - \hat{d} + d = 0$$
Assuming that the disturbance $d$ has a much smaller average rate of change than that of the error state variables (so that $\dot{d} \approx 0$), taking the time derivative of (15) yields
$$\dddot{e}_1 + k_2 \ddot{e}_1 + (k_1 + \dot{k}_2)\dot{e}_1 + \dot{k}_1 e_1 - \dot{\hat{d}} = 0.$$
Substituting the associated expressions for $\dot{k}_i$ and $\dot{\hat{d}}$ into (16), we obtain
$$\dddot{e}_1 + k_2 \ddot{e}_1 + k_1 \dot{e}_1 + (\alpha_1 c^T e_1)e_1 + (\alpha_2 c^T \dot{e}_1)\dot{e}_1 + \alpha_{\hat{d}}\, c = 0.$$
Using the expression for $c(e, k_d)$ given by Equation (11), we obtain
$$\dddot{e}_1 + A_1(\xi)\ddot{e}_1 + A_2(\xi)\dot{e}_1 + A_3(\xi)e_1 = 0$$
where $\xi = [e_1^T \ \dot{e}_1^T \ \ddot{e}_1^T]^T$ is the state and
$$A_1 = (k_2 + \alpha_{\hat{d}})I_n + \alpha_1 e_1 e_1^T + \alpha_2 \dot{e}_1 \dot{e}_1^T$$
$$A_2 = (k_1 + \alpha_{\hat{d}} k_2^d)I_n + k_2^d\left(\alpha_1 e_1 e_1^T + \alpha_2 \dot{e}_1 \dot{e}_1^T\right)$$
$$A_3 = k_1^d\left[\alpha_{\hat{d}} I_n + \alpha_1 e_1 e_1^T + \alpha_2 \dot{e}_1 \dot{e}_1^T\right].$$
Equation (17) is in a pseudo-linear form, and a stability analysis can be performed using the Routh–Hurwitz criterion [59]. Clearly, for all $\xi$, the matrices $A_i(\xi)$, $i = 1, 2, 3$, are symmetric and positive definite, and the characteristic equation corresponding to $A_i(0)$, $i = 1, 2, 3$, is given by
$$s^3 + (k_2 + \alpha_{\hat{d}})s^2 + (k_1 + \alpha_{\hat{d}} k_2^d)s + k_1^d \alpha_{\hat{d}} = 0.$$
The stability of the closed-loop error system can be shown by applying the Routh–Hurwitz criterion: the characteristic Equation (20) has all its roots with negative real parts if the following condition is satisfied:
$$(k_2 + \alpha_{\hat{d}})(k_1 + \alpha_{\hat{d}} k_2^d) > k_1^d \alpha_{\hat{d}}.$$
In what follows, $k_i$, $k_i^d$, $i = 1, 2$, and $\alpha_{\hat{d}}$ are chosen such that the stability condition in Equation (21) is satisfied.
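The gain-selection condition above amounts to a simple sign-and-product check on the characteristic polynomial coefficients; a sketch follows, with the helper name `gains_are_stable` chosen for illustration.

```python
def gains_are_stable(k1, k2, k1d, k2d, alpha_dhat):
    """Routh-Hurwitz test for the characteristic polynomial of Eq. (20),
    s^3 + (k2 + a)s^2 + (k1 + a*k2d)s + k1d*a with a = alpha_dhat:
    all coefficients positive and (k2 + a)(k1 + a*k2d) > k1d*a (Eq. (21))."""
    a2 = k2 + alpha_dhat
    a1 = k1 + alpha_dhat * k2d
    a0 = k1d * alpha_dhat
    return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a0
```

With the Case I values ($k_1 = k_2 = 1$, $k_1^d = 1$, $k_2^d = 2$, $\alpha_{\hat{d}} = 10.5$), the condition reads $11.5 \times 22 > 10.5$ and is comfortably satisfied.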

4. Example: Simple Learning Control of a PPR Robot

As a sample nonlinear system, a planar PPR robot with three joints (two prismatic and one revolute) is considered, as shown in Figure 1. The planar PPR robot moves on a horizontal plane, so the gravitational potential can be ignored. In Figure 1, the Cartesian position of the revolute joint and the orientation of the third link are denoted by $(x, y)$ and $\theta$, respectively. The control inputs are the torque $\tau$ actuating the revolute joint and the forces $F_x$ and $F_y$ applied to the two prismatic joints. The physical parameters of the PPR robot are the masses $m_1$, $m_2$, $m_3$ of the three links; the distance $l$ from the revolute joint to the center of mass (CoM) of the third link; and the moment of inertia $J_3$ of the third link about its CoM.
The Lagrangian is given by
$$L = \frac{1}{2}(m_1 + m_2)\dot{x}^2 + \frac{1}{2}m_2 \dot{y}^2 + \frac{1}{2}m_3 v_3^2 + \frac{1}{2}J_3 \dot{\theta}^2$$
where $v_3$ is the speed of the CoM of the third link, which is given by
$$v_3 = \sqrt{(\dot{x} - l\dot{\theta}\sin\theta)^2 + (\dot{y} + l\dot{\theta}\cos\theta)^2}.$$
The equations of motion can be obtained using the Lagrangian formulation as
$$m_x \ddot{x} - m_3 l \ddot{\theta}\sin\theta - m_3 l \dot{\theta}^2\cos\theta = F_x$$
$$m_y \ddot{y} + m_3 l \ddot{\theta}\cos\theta - m_3 l \dot{\theta}^2\sin\theta = F_y$$
$$J\ddot{\theta} + m_3 l(\ddot{y}\cos\theta - \ddot{x}\sin\theta) = \tau$$
where $m_x = m_1 + m_2 + m_3$, $m_y = m_2 + m_3$, and $J = J_3 + m_3 l^2$.
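The inertia matrix and velocity-dependent terms implied by these equations of motion can be sketched as below; the function name `ppr_model` and the argument ordering are illustrative assumptions.

```python
import numpy as np

def ppr_model(q, qdot, m1, m2, m3, l, J3):
    """Inertia matrix M(q) and velocity-dependent vector F(q, qdot)
    read off from the PPR equations of motion, with m_x = m1+m2+m3,
    m_y = m2+m3, J = J3 + m3*l**2, and B(q) = I (fully actuated)."""
    th, thdot = q[2], qdot[2]
    mx, my, J = m1 + m2 + m3, m2 + m3, J3 + m3 * l**2
    s, c = np.sin(th), np.cos(th)
    M = np.array([[mx,          0.0,         -m3 * l * s],
                  [0.0,         my,           m3 * l * c],
                  [-m3 * l * s, m3 * l * c,   J         ]])
    F = np.array([-m3 * l * thdot**2 * c,
                  -m3 * l * thdot**2 * s,
                  0.0])
    return M, F
```

As a quick sanity check, $M(q)$ is symmetric for any configuration, and $F$ vanishes when $\dot{\theta} = 0$, consistent with the purely centrifugal origin of those terms.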

4.1. Computer Simulation Results

To evaluate the performance of the control law developed in Section 3, the PPR robot system is considered in this section. The nominal and estimated parameters of the PPR robot system are given in Table 1 and Table 2, respectively.
Three numerical simulation cases were considered for evaluating the performance of the developed simple learning-based controller.
  • Case I: Tracking sinusoidal trajectories;
  • Case II: Tracking constant trajectories;
  • Case III: Tracking under large external disturbances.
In Case I, the developed controller generates the appropriate control actions for tracking sinusoidal trajectories. Then, in Case II, the control actions for properly tracking constant trajectories are studied. Finally, in Case III, the performance of the developed controller for trajectory tracking under large external disturbances is evaluated.
Simulations are carried out with the following initial conditions:
$$(x_0, y_0, \theta_0) = (0, 0, 0), \quad (\dot{x}_0, \dot{y}_0, \dot{\theta}_0) = (0, 0, 0).$$
In all simulation cases, the following external disturbances are assumed:
$$d_x = d_y = 0.5\ \mathrm{m/s^2}, \quad d_\theta = 0.5\ \mathrm{rad/s^2}.$$
All the simulations using the developed control algorithm are implemented on a platform with the following specifications: MacBook Pro (macOS 13.2.1), Processor: 2.3 GHz Intel Core i5, Memory: 16 GB. In all cases, the total simulation time is 70 s and the sampling time is 0.05 s. In this work, we employed the fourth-order Runge–Kutta method to numerically solve the equations of the system.
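A single step of the classical fourth-order Runge–Kutta scheme used here can be sketched as follows; the function name `rk4_step` is an illustrative choice.

```python
def rk4_step(f, t, x, dt):
    """One classical fourth-order Runge-Kutta step x(t) -> x(t + dt)
    for the ODE xdot = f(t, x), matching the 0.05 s sampling time."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

At each sampling instant, the closed-loop right-hand side (plant states together with the gain and disturbance-estimate updates) would be advanced by one such step over the 70 s horizon.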

4.2. Case I: Tracking Sinusoidal Trajectories

To study the tracking of sinusoidal trajectories, the desired trajectory for the PPR robot is designed as follows:
$$x_d(t) = \cos t\ [\mathrm{m}], \quad y_d(t) = \sin\frac{t}{4}\ [\mathrm{m}], \quad \theta_d(t) = 45\cos\frac{t}{2}\ [\mathrm{deg}].$$
For the simulation in Case I, the control gains below are used for the desired closed-loop error dynamics:
$$k_1^d = 1, \quad k_2^d = 2.$$
All the learning rates and the initial values for the controller in Case I are given below:
$$k_1(0) = 1, \quad k_2(0) = 1, \quad \alpha_1 = 10.5, \quad \alpha_2 = 0.2.$$
The learning rate and the initial values for the disturbance estimates in Case I are as follows:
$$\hat{d}_x(0) = 0, \quad \hat{d}_y(0) = 0, \quad \hat{d}_\theta(0) = 0, \quad \alpha_{\hat{d}} = 10.5.$$
Figure 2 illustrates that the configuration variables x , y , and θ closely follow the reference trajectory.
The control gains $k_1$ and $k_2$, the estimated disturbances $\hat{d}_x$, $\hat{d}_y$, and $\hat{d}_\theta$, and the control input forces $F_x$, $F_y$ and torque $F_\theta$ for Case I (tracking sinusoidal trajectories) are shown in Figure 3, Figure 4 and Figure 5, respectively.

4.3. Case II: Tracking Constant Trajectories

To study the tracking of constant trajectories, the desired trajectory for the PPR robot is designed as follows:
$$x_d = 1\ [\mathrm{m}], \quad y_d = 2\ [\mathrm{m}], \quad \theta_d = 90\ [\mathrm{deg}].$$
For the simulation in Case II, the control gains below are used for the desired closed-loop error dynamics:
$$k_1^d = 1, \quad k_2^d = 1.2.$$
All the learning rates and the initial values for the controller in Case II are given below:
$$k_1(0) = 1, \quad k_2(0) = 1, \quad \alpha_1 = 1.5, \quad \alpha_2 = 0.2.$$
The learning rate and the initial values for the disturbance estimates in Case II are as follows:
$$\hat{d}_x(0) = 0, \quad \hat{d}_y(0) = 0, \quad \hat{d}_\theta(0) = 0, \quad \alpha_{\hat{d}} = 1.5.$$
Figure 6 illustrates that the configuration variables x , y , and θ closely follow the reference trajectory.
The control gains $k_1$ and $k_2$, the estimated disturbances $\hat{d}_x$, $\hat{d}_y$, and $\hat{d}_\theta$, and the control input forces $F_x$, $F_y$ and torque $F_\theta$ for Case II are shown in Figure 7, Figure 8 and Figure 9, respectively.

4.4. Case III: Trajectory Tracking under Large External Disturbances

To study the tracking performance of the developed control technique under large external disturbances, an additional disturbance of $2\ \mathrm{m/s^2}$ is applied to $d_x$ (i.e., $d_x = 2.5\ \mathrm{m/s^2}$) during the interval 30–31 s.
In this simulation, the desired trajectory for the PPR robot is designed as follows:
$$x_d = 1\ [\mathrm{m}], \quad y_d = 2\ [\mathrm{m}], \quad \theta_d = 90\ [\mathrm{deg}].$$
For the simulation in Case III, the control gains below are used for the desired closed-loop error dynamics:
$$k_1^d = 1, \quad k_2^d = 1.2.$$
All the learning rates and the initial values for the controller in Case III are given below:
$$k_1(0) = 1, \quad k_2(0) = 1, \quad \alpha_1 = 1.5, \quad \alpha_2 = 0.2.$$
The learning rate and the initial values for the disturbance estimates in Case III are as follows:
$$\hat{d}_x(0) = 0, \quad \hat{d}_y(0) = 0, \quad \hat{d}_\theta(0) = 0, \quad \alpha_{\hat{d}} = 1.5.$$
Figure 10 illustrates that the configuration variables x , y , and θ closely follow the reference trajectory.
The control gains $k_1$ and $k_2$, the estimated disturbances $\hat{d}_x$, $\hat{d}_y$, and $\hat{d}_\theta$, and the control input forces $F_x$, $F_y$ and torque $F_\theta$ for Case III are shown in Figure 11, Figure 12 and Figure 13, respectively.

5. Conclusions

A robust trajectory tracking control strategy for a class of uncertain dynamical systems is presented in this paper. The external disturbances and model uncertainties are compensated via the developed simple learning-based control technique. The developed control design minimizes the cost function associated with the error dynamics of a class of nonlinear systems. The significance of the developed robust trajectory tracking control method is demonstrated via the results of numerical computer simulations of a PPR robot.
In this paper, the results are obtained for fully actuated systems with invertible control input matrix B ( q ) . Future work includes extending these results to underactuated systems, i.e., systems with fewer inputs than their degrees of freedom, where the control input matrix B ( q ) is not invertible [60].

Author Contributions

M.R. performed the analysis and wrote the original draft. M.J. obtained the computational results and contributed to the editing of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to acknowledge the support provided by Columbus State University, USA.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Izaguirre-Espinosa, C.; Muñoz-Vázquez, A.J.; Sanchez-Orta, A.; Parra-Vega, V.; Castillo, P. Contact force tracking of quadrotors based on robust attitude control. Control. Eng. Pract. 2018, 78, 89–96. [Google Scholar] [CrossRef]
  2. Mammarella, M.; Capello, E.; Park, H.; Guglieri, G.; Romano, M. Tube-based robust model predictive control for spacecraft proximity operations in the presence of persistent disturbance. Aerosp. Sci. Technol. 2018, 77, 585–594. [Google Scholar] [CrossRef]
  3. Ambati, P.R.; Padhi, R. Robust auto-landing of fixed-wing UAVs using neuro-adaptive design. Control Eng. Pract. 2017, 60, 218–232. [Google Scholar] [CrossRef]
  4. Muniraj, D.; C, P.M.; Guthrie, K.T.; Farhood, M. Path-following control of small fixed-wing unmanned aircraft systems with H type performance. Control Eng. Pract. 2017, 67, 76–91. [Google Scholar] [CrossRef]
  5. Zhao, B.; Xian, B.; Zhang, Y.; Zhang, X. Nonlinear robust adaptive tracking control of a quadrotor UAV via immersion and invariance methodology. IEEE Trans. Ind. Electron. 2015, 62, 2891–2902. [Google Scholar] [CrossRef]
  6. Mystkowski, A. Implementation and investigation of a robust control algorithm for an unmanned micro-aerial vehicle. Robot. Auton. Syst. 2014, 62, 1187–1196. [Google Scholar] [CrossRef]
  7. Liu, H.; Bai, Y.; Lu, G.; Zhong, Y. Robust attitude control of uncertain quadrotors. IET Control Theory Appl. 2013, 7, 1583–1589. [Google Scholar] [CrossRef]
  8. Roy, S.; Baldi, S.; Fridman, L.M. On adaptive sliding mode control without a priori bounded uncertainty. Automatica 2020, 111, 108650. [Google Scholar] [CrossRef]
  9. Kayacan, E. Sliding mode learning control of uncertain nonlinear systems with Lyapunov stability analysis. Trans. Inst. Meas. Control 2019, 41, 1750–1760. [Google Scholar] [CrossRef]
  10. Liu, Z.; Liu, X.; Chen, J.; Fang, C. Altitude control for variable load quadrotor via learning rate based robust sliding mode controller. IEEE Access 2019, 7, 9736–9744. [Google Scholar] [CrossRef]
  11. Babaei, A.R.; Malekzadeh, M.; Madhkhan, D. Adaptive super-twisting sliding mode control of 6-DOF nonlinear and uncertain air vehicle. Aerosp. Sci. Technol. 2019, 84, 361–374. [Google Scholar] [CrossRef]
  12. Liu, L.; Zhu, J.; Tang, G.; Bao, W. Diving guidance via feedback linearization and sliding mode control. Aerosp. Sci. Technol. 2015, 41, 16–23. [Google Scholar] [CrossRef]
  13. Ramirez-Rodriguez, H.; Parra-Vega, V.; Sanchez-Orta, A.; Garcia-Salazar, O. Robust backstepping control based on integral sliding modes for tracking of quadrotors. J. Intell. Robot. Syst. 2014, 73, 51–66. [Google Scholar] [CrossRef]
  14. Kayacan, E. Sliding mode control for systems with mismatched time-varying uncertainties via a self-learning disturbance observer. Trans. Inst. Meas. Control 2019, 41, 2039–2052. [Google Scholar] [CrossRef]
  15. Kayacan, E.; Saeys, W.; Ramon, H.; Belta, C.; Peschel, J. Experimental validation of linear and nonlinear MPC on an articulated unmanned ground vehicle. IEEE/ASME Trans. Mechatronics 2018, 23, 2023–2030. [Google Scholar] [CrossRef]
  16. Ostafew, C.J.; Schoellig, A.P.; Barfoot, T.D. Robust constrained learning-based NMPC enabling reliable mobile robot path tracking. Int. J. Robot. Res. 2016, 35, 1547–1563. [Google Scholar] [CrossRef]
  17. Kayacan, E.; Kayacan, E.; Ramon, H.; Saeys, W. Learning in centralized nonlinear model predictive control: Application to an autonomous tractor-trailer system. IEEE Trans. Control Syst. Technol. 2015, 23, 197–205. [Google Scholar] [CrossRef]
  18. Mehndiratta, M.; Kayacan, E.; Patel, S.; Kayacan, E.; Chowdhary, G. Learning-based fast nonlinear model predictive control for custom-made 3D printed ground and aerial robots. In Handbook of Model Predictive Control; Birkhäuser: Cham, Switzerland, 2019; pp. 581–605. [Google Scholar]
  19. Kocer, B.B.; Tjahjowidodo, T.; Seet, G.G.L. Centralized predictive ceiling interaction control of quadrotor VTOL UAV. Aerosp. Sci. Technol. 2018, 76, 455–465. [Google Scholar] [CrossRef]
  20. Dierks, T.; Jagannathan, S. Output feedback control of a quadrotor UAV using neural networks. IEEE Trans. Neural Netw. 2010, 21, 50–66. [Google Scholar] [CrossRef]
  21. Nicol, C.; Macnab, C.J.B.; Ramirez-Serrano, A. Robust neural network control of a quadrotor helicopter. In Proceedings of the 2008 Canadian Conference on Electrical and Computer Engineering, Niagara Falls, ON, Canada, 4–7 May 2008; pp. 1233–1238. [Google Scholar]
  22. Fu, C.; Hong, W.; Zhang, L.; Guo, X.; Tian, Y. Adaptive robust backstepping attitude control for a multi-rotor unmanned aerial vehicle with time-varying output constraints. Aerosp. Sci. Technol. 2018, 78, 593–603. [Google Scholar] [CrossRef]
  23. Wu, B.; Wu, J.; He, W.; Tang, G.; Zhao, Z. Adaptive neural control for an uncertain 2-DOF helicopter system with unknown control direction and actuator faults. Mathematics 2022, 10, 4342. [Google Scholar] [CrossRef]
  24. Jafari, M.; Marquez, G.; Selberg, J.; Jia, M.; Dechiraju, H.; Pansodtee, P.; Teodorescu, M.; Rolandi, M.; Gomez, M. Feedback control of bioelectronic devices using machine learning. IEEE Control Syst. Lett. 2020, 5, 1133–1138. [Google Scholar] [CrossRef]
  25. Zhu, G.; Wu, X.; Yan, Q.; Cai, J. Robust learning control for tank gun control servo systems under alignment condition. IEEE Access 2019, 7, 145524–145531. [Google Scholar] [CrossRef]
  26. Xu, Z.; Li, W.; Wang, Y. Robust learning control for shipborne manipulator with fuzzy neural network. Front. Neurorobotics 2019, 13, 11. [Google Scholar] [CrossRef] [PubMed]
  27. Kayacan, E.; Khanesar, M.A.; Rubio-Hervas, J.; Reyhanoglu, M. Learning control of fixed-wing unmanned aerial vehicles using fuzzy neural networks. Int. J. Aerosp. Eng. 2017, 2017, 5402809. [Google Scholar] [CrossRef]
  28. Wu, S.; Wen, S.; Liu, Y.; Zhang, K. Robust adaptive learning control for spacecraft autonomous proximity maneuver. Int. J. Pattern Recognit. Artif. Intell. 2017, 31, 1759007. [Google Scholar] [CrossRef]
  29. Imanberdiyev, N.; Kayacan, E. A fast learning control strategy for unmanned aerial manipulators. J. Intell. Robot. Syst. 2019, 94, 805–824. [Google Scholar] [CrossRef]
  30. Kayacan, E.; Kayacan, E.; Khanesar, M.A. Identification of nonlinear dynamic systems using Type-2 fuzzy neural networks—A novel learning algorithm and a comparative study. IEEE Trans. Ind. Electron. 2015, 62, 1716–1724. [Google Scholar] [CrossRef]
  31. Khanesar, M.A.; Kayacan, E.; Reyhanoglu, M.; Kaynak, O. Feedback error learning control of magnetic satellites using type-2 fuzzy neural networks with elliptic membership functions. IEEE Trans. Cybern. 2015, 45, 858–868. [Google Scholar] [CrossRef]
  32. Han, H.G.; Yang, F.F.; Yang, H.Y.; Wu, X.L. Type-2 fuzzy broad learning controller for wastewater treatment process. Neurocomputing 2021, 459, 188–200. [Google Scholar] [CrossRef]
  33. Kayacan, E.; Kayacan, E.; Ramon, H.; Saeys, W. Adaptive neuro-fuzzy control of a spherical rolling robot using sliding-mode-control-theory-based online learning algorithm. IEEE Trans. Cybern. 2013, 43, 170–179. [Google Scholar] [CrossRef] [PubMed]
  34. Rossomando, F.G.; Soria, C.; Carelli, R. Sliding mode neuro adaptive control in trajectory tracking for mobile robots. J. Intell. Robot. Syst. 2014, 74, 931–944. [Google Scholar] [CrossRef]
  35. Topalov, A.V.; Kaynak, O. Online learning in adaptive neurocontrol schemes with a sliding mode algorithm. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2001, 31, 445–450. [Google Scholar] [CrossRef] [PubMed]
  36. Kayacan, E.; Fossen, T.I. Feedback linearization control for systems with mismatched uncertainties via disturbance observers. Asian J. Control 2019, 21, 1064–1076. [Google Scholar] [CrossRef]
  37. de Jesús Rubio, J. Robust feedback linearization for nonlinear processes control. ISA Trans. 2018, 74, 155–164. [Google Scholar] [CrossRef]
  38. Jafari, M.; Xu, H.; Garcia Carrillo, L.R. A neurobiologically-inspired intelligent trajectory tracking control for unmanned aircraft systems with uncertain system dynamics and disturbance. Trans. Inst. Meas. Control 2019, 41, 417–432. [Google Scholar] [CrossRef]
  39. Jafari, M.; Xu, H. Intelligent control for unmanned aerial systems with system uncertainties and disturbances using artificial neural network. Drones 2018, 2, 30. [Google Scholar] [CrossRef]
  40. Şahin, S. Learning feedback linearization using artificial neural networks. Neural Process. Lett. 2016, 44, 625–637. [Google Scholar] [CrossRef]
  41. Umlauft, J.; Beckers, T.; Kimmel, M.; Hirche, S. Feedback linearization using Gaussian processes. In Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, Australia, 12–15 December 2017; pp. 5249–5255. [Google Scholar]
  42. Hou, Z.; Jin, S. Model Free Adaptive Control: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
43. Chi, R.; Hou, Z.; Huang, B.; Jin, S. A unified data-driven design framework of optimality-based generalized iterative learning control. Comput. Chem. Eng. 2015, 77, 10–23.
44. Zhu, Y.; Hou, Z. Data-driven MFAC for a class of discrete-time nonlinear systems with RBFNN. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1013–1020.
45. Hou, Z.; Zhu, Y. Controller-dynamic-linearization-based model free adaptive control for discrete-time nonlinear systems. IEEE Trans. Ind. Inform. 2013, 9, 2301–2309.
46. Hou, Z.; Chi, R.; Gao, H. An overview of dynamic-linearization-based data-driven control and applications. IEEE Trans. Ind. Electron. 2017, 64, 4076–4090.
47. Liu, H.; Zhao, W.; Zuo, Z.; Zhong, Y. Robust control for quadrotors with multiple time-varying uncertainties and delays. IEEE Trans. Ind. Electron. 2017, 64, 1303–1312.
48. Meda-Campaña, J.A. On the estimation and control of nonlinear systems with parametric uncertainties and noisy outputs. IEEE Access 2018, 6, 31968–31973.
49. You, S.; Son, Y.S.; Gui, Y.; Kim, W. Gradient-descent-based learning gain for backstepping controller and disturbance observer of nonlinear systems. IEEE Access 2023, 11, 2743–2753.
50. Chen, W.H.; Yang, J.; Guo, L.; Li, S. Disturbance-observer-based control and related methods: An overview. IEEE Trans. Ind. Electron. 2016, 63, 1083–1095.
51. Huang, J.; Ri, S.; Liu, L.; Wang, Y.; Kim, J.; Pak, G. Nonlinear disturbance observer-based dynamic surface control of mobile wheeled inverted pendulum. IEEE Trans. Control Syst. Technol. 2015, 23, 2400–2407.
52. Kayacan, E.; Peschel, J.M.; Chowdhary, G. A self-learning disturbance observer for nonlinear systems in feedback-error learning scheme. Eng. Appl. Artif. Intell. 2017, 62, 276–285.
53. Chen, W.H. Disturbance observer based control for nonlinear systems. IEEE/ASME Trans. Mechatron. 2004, 9, 706–710.
54. Dutta, L.; Das, D.K. Nonlinear disturbance observer based adaptive explicit nonlinear model predictive control design for a class of nonlinear MIMO system. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 1965–1979.
55. Ren, B.; Zhong, Q.C.; Dai, J. Asymptotic reference tracking and disturbance rejection of UDE-based robust control. IEEE Trans. Ind. Electron. 2017, 64, 3166–3176.
56. Mehndiratta, M.; Kayacan, E.; Reyhanoglu, M.; Kayacan, E. Robust tracking control of aerial robots via a simple learning strategy-based feedback linearization. IEEE Access 2019, 8, 1653–1669.
57. Reyhanoglu, M.; Jafari, M.; Rehan, M. Simple learning-based robust trajectory tracking control of a 2-DOF helicopter system. Electronics 2022, 11, 2075.
58. Goldstein, H.; Poole, C.; Safko, J. Classical Mechanics; Addison Wesley: San Francisco, CA, USA, 2002.
59. Langson, W.; Alleyne, A. A stability result with application to nonlinear regulation: Theory and experiments. In Proceedings of the 1999 American Control Conference, San Diego, CA, USA, 2–4 June 1999; Volume 5, pp. 3051–3056.
60. Reyhanoglu, M.; van der Schaft, A.; McClamroch, N.H.; Kolmanovsky, I. Dynamics and control of a class of underactuated mechanical systems. IEEE Trans. Autom. Control 1999, 44, 1663–1671.
Figure 1. Schematic representation of a planar PPR robot.
Figure 2. Performance of the PPR robot for sinusoidal trajectory tracking (see Case I: Tracking Sinusoidal Trajectories).
Figure 3. Control gains for Case I (Tracking Sinusoidal Trajectories).
Figure 4. All the estimated disturbances for Case I (Tracking Sinusoidal Trajectories).
Figure 5. Input forces and torque for Case I (Tracking Sinusoidal Trajectories).
Figure 6. Performance of the PPR robot for constant trajectory tracking (see Case II: Tracking Constant Trajectories).
Figure 7. Control gains for Case II (Tracking Constant Trajectories).
Figure 8. All the estimated disturbances for Case II (Tracking Constant Trajectories).
Figure 9. Input forces and torque for Case II (Tracking Constant Trajectories).
Figure 10. Performance of the PPR robot for trajectory tracking under large external disturbances (see Case III: Trajectory Tracking under Large External Disturbances).
Figure 11. Control gains for Case III (Trajectory Tracking under Large External Disturbances).
Figure 12. All the estimated disturbances for Case III (Trajectory Tracking under Large External Disturbances).
Figure 13. Input forces and torque for Case III (Trajectory Tracking under Large External Disturbances).
Table 1. Parameters of the PPR robot system.

Symbol | Parameter                                           | Value | Unit
m1     | Mass of link 1                                      | 5     | kg
m2     | Mass of link 2                                      | 10    | kg
m3     | Mass of link 3                                      | 15    | kg
J3     | Moment of inertia of link 3 about its CoM           | 1.5   | kg·m²
l      | Distance between revolute joint and CoM of link 3   | 0.5   | m
Table 2. Estimated parameters of the PPR robot system.

Symbol | Parameter                                           | Value | Unit
m1     | Mass of link 1                                      | 4     | kg
m2     | Mass of link 2                                      | 8     | kg
m3     | Mass of link 3                                      | 12    | kg
J3     | Moment of inertia of link 3 about its CoM           | 1.7   | kg·m²
l      | Distance between revolute joint and CoM of link 3   | 0.8   | m
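Tables 1 and 2 together define the parameter mismatch under which the controller's robustness is exercised in the simulations. The short Python sketch below simply restates the table values and computes the relative estimation error of each parameter (the variable names are illustrative, not taken from the paper's code):

```python
# Nominal PPR robot parameters (Table 1) and the estimates used by
# the controller (Table 2); units are kg, kg·m^2, and m.
nominal = {"m1": 5.0, "m2": 10.0, "m3": 15.0, "J3": 1.5, "l": 0.5}
estimated = {"m1": 4.0, "m2": 8.0, "m3": 12.0, "J3": 1.7, "l": 0.8}

# Relative error of each estimated parameter, i.e., the model
# mismatch the learning-based control law must compensate for.
mismatch = {k: (estimated[k] - nominal[k]) / nominal[k] for k in nominal}

for name, err in mismatch.items():
    print(f"{name}: {err:+.1%}")
```

Under these values, the three link masses are underestimated by 20%, while the moment of inertia and the CoM distance are overestimated by roughly 13% and 60%, respectively; this is a substantial mismatch for a model-based tracking controller to absorb.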