Article

Adaptive Visual Control for Robotic Manipulators with Consideration of Rigid-Body Dynamics and Joint-Motor Dynamics

by
Zhitian Chen
1,
Weijian Wen
2,* and
Weijun Yang
2
1
School of Automation, Guangdong University of Technology, Guangzhou 510006, China
2
School of Intelligent Manufacturing, Guangzhou City Polytechnic, Guangzhou 510405, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2413; https://doi.org/10.3390/math12152413
Submission received: 3 July 2024 / Revised: 18 July 2024 / Accepted: 29 July 2024 / Published: 2 August 2024
(This article belongs to the Special Issue Application of Mathematical Method in Robust and Nonlinear Control)

Abstract

A novel cascade visual control scheme is proposed, tailored for electrically driven robotic manipulators that operate under kinematic and dynamic uncertainties and are observed by an uncalibrated stationary camera. The proposed control approach incorporates adaptive-weight radial basis function neural networks (RBFNNs) to learn the behaviors of the uncertain dynamics of the robot and the joint actuators. The controllers are designed to nullify the approximation error and mitigate unknown disturbances through an integrated robust adaptive mechanism. A major advantage of the proposed approach is that prior knowledge of the dynamics of the robotic manipulator and its actuators is no longer required. The controller assimilates the robot and actuator dynamics online, thereby obviating the tedious derivation of a regression matrix and the advance dynamic measurements otherwise needed to establish an adaptive dynamic-parameter update algorithm. The proposed scheme ensures closed-loop system stability, bounded system states, and the convergence of tracking errors to zero. Simulation results, employing a PUMA manipulator as a testbed, substantiate the viability of the proposed control policy.

1. Introduction

Visual servoing control, leveraging visual cameras as efficient sensors, has emerged as a pivotal technique in robotic manipulation. Widely applied across various intelligent fields, it plays a critical role in enhancing the precision and adaptability of robotic systems in numerous industries [1,2,3,4]. Hence, it is routinely deployed to tackle complex and demanding tasks [5,6]. With the development of visual control methods, a variety of control strategies have been explored and established. At present, mainstream visual servoing methods are primarily categorized into two types, according to how their error functions are constructed. The first type, position-based visual servoing (PBVS), computes the control error from the difference between the current and desired poses, as inferred from the corresponding image features [3,7,8,9]. This method fundamentally relies on the mapping between the pose and the image, which makes it notably sensitive to the precision of the camera parameters and susceptible to disturbances from image noise. In contrast, image-based visual servoing (IBVS) derives the system error directly from image feedback features, rendering it more robust and reliable than PBVS [10,11,12,13]. Given these distinct advantages and limitations, some researchers have proposed a combined approach, hybrid visual servoing (HVS) [14,15,16], which integrates the error types of both PBVS and IBVS into a cohesive control policy [17]. Nevertheless, the effectiveness of this hybrid approach, as with all visual servoing techniques, remains dependent on the measurement accuracy and parameter settings of the visual system. Therefore, uncalibrated visual servoing has emerged as a significant research focus within the field of visual control [18].
To address the problem of the uncalibrated camera, a 2 1/2D visual servoing scheme based on an estimation method for the homography matrix was developed in [19]. In [20], a flexible camera calibration method for computer vision is proposed. Additionally, a planar robot visual servoing control scheme that is robust to radial lens aberrations and indeterminate camera orientation is presented in [21]. Ref. [22] proposed to utilize epipolar geometry to compute the goal location and the translation direction with uncalibrated cameras and robots. However, the control policies mentioned above are predicated solely upon kinematic controllers, rather than the dynamics of the system. It is widely acknowledged that models of robotic dynamics incorporate an extensive array of physical manipulator parameters, covering nonlinear forces, friction, and inertia.
Therefore, a significant number of visual controllers based on robotic nonlinear dynamics have been presented. As opposed to the kinematics-based methods, the dynamics-based approaches rely on an analytical derivation of the unknown parameters of the explicit Jacobian matrix and design a matched algorithm to achieve system self-calibration. In [23], an adaptive estimation approach for the parameters of the uncalibrated visual system is proposed to design a dynamic controller for planar robots. Informed by this research, a depth-independent image interaction matrix, which maps the image information onto the joint space of the robot without requiring the depth information, is proposed in [24]. The corresponding parameterized method and the adaptive parameter update approach are also developed in that study. Additionally, projective homography-based uncalibrated visual servoing is proposed in [25], which constructs a novel task function from projection theory with prior knowledge of the camera parameters.
Although the aforementioned solutions propose comprehensive control strategies, the operational dynamics of the robot are subject to variability due to factors such as the interchange and installation of grippers and other attachments. These factors can alter the physical parameters of the robot, thereby introducing uncertainties into its kinematics and dynamics. To address this, a two-layer control scheme incorporating a model predictive strategy and an extended state observer is presented for multi-manipulator trajectory tracking systems with parametric uncertainties in [26]. To overcome the dynamic uncertainties of highly kinematically redundant robots, an adaptive sliding mode controller with an online dynamic estimator is designed in [27]. In [28], a novel method merging feedback linearization with a multilayer neural network (MNN)-based observer is introduced for precise robotic manipulator control, bypassing complex uncertainty modeling. For the uncalibrated visual servoing control system, a hybrid estimation method, which estimates the unknown parameters of both the camera and the robotic manipulator, is proposed in [29] to eliminate uncertainty, as an extension of the study [24].
However, the aforementioned studies are based on the premise that the joint actuators of robotic manipulators function under ideal input–output conditions. In practice, as the robotic manipulator executes movements, the physical parameters of the joint actuator motors undergo alterations due to factors such as thermal drift, thereby introducing uncertainties into the motor model [30,31,32,33,34,35]. In [32], a controller based on a differential-equation approximation method is proposed. Although it addresses the uncertainties in the dynamics, the approximation parameters of the differential equation must be selected through experimentation, which is undoubtedly a complex and costly task. In [34], an inverse Jacobian and an actuator adaptation law are proposed to overcome the uncertain kinematic, dynamic, and actuator parameters of the robotic manipulator; however, the adaptive rule is established by virtue of the regressor matrix, and only a DC motor whose torque is proportional to voltage is considered. The study [36] presents a fuzzy logic system to compensate for the uncertainties of the system but neglects the joint torque error caused by the uncertainties of the mechanical and physical parameters of the robot. Moreover, adaptive fuzzy systems necessitate meticulous calibration of membership functions and rule sets, a process that is inherently complex and requires specialized expertise. An adaptive visual servoing control for electrically driven robotic manipulators is investigated in [35], but the provided scheme contains the time derivative of the desired armature current, which is quite difficult to compute, since it involves the derivative of the adaptive parameters and the adaptive estimation of the derivative of parameters.
Motivated by the aforementioned considerations, this study investigates the problem of visual control for robotic manipulators, focusing on the uncertain kinematics and dynamics of the robot and its actuator motors, using an uncalibrated camera. By means of the depth-independent composite Jacobian matrix proposed in [29], a kinematic pose controller is constructed and applied to compute the reference joint angular velocity from the image error feedback. The parameters of this virtual controller are estimated by an adaptive update rule. To track the reference joint velocity and compensate for the uncertainties of the robotic dynamics, a torque controller with an adaptive RBFNN is proposed. Further, a voltage controller, which also includes an RBFNN approximator to eliminate the effects of the uncertain motor dynamics, is designed to track the torque control output as the actual system input. The weights of the RBFNNs are updated by an adaptive algorithm through the tracking error feedback. To eliminate the approximation error and the disturbances in the uncalibrated environment, an adaptive robust mechanism is incorporated separately into the torque controller and the voltage controller. Ultimately, the system states and the tracking errors of the different closed loops are globally asymptotically stable under the proposed control scheme. In summary, the work of this study has the following contributions and novelties:
  • Compared with existing uncalibrated visual control schemes for robot manipulators, e.g., [23,24,25], the uncertainties in kinematics and dynamics and the influences of joint-motor dynamics are further considered in design and analysis and well addressed by our newly proposed neural network-based IBVS strategy.
  • Although some attempts have been made in developing uncalibrated visual control schemes capable of accommodating motor dynamics (see [35] for examples), the implementation of the developed control algorithm requires the time derivatives of the regressor matrix and many other signals, resulting in a large computational burden. Such a drawback is removed with our scheme by making full use of the universal approximation property of neural networks.
  • A robust adaptive mechanism is newly designed and added in our visual control scheme, with which good robustness of the proposed control scheme to neural network approximation errors and exogenous disturbances can be well established.
The rest of this paper is organized as follows: In Section 2, the vision system model and the dynamic equation of the robotic manipulator are given, based on which a cascade image-based adaptive visual control scheme is established in Section 3. In Section 4, a rigorous stability analysis of the proposed scheme is provided based on the Lyapunov argument. Additionally, the effectiveness of our scheme is verified by the simulation results in Section 5. We conclude the whole paper in Section 6.
Notation 1.
To maintain clarity throughout this paper, we establish a set of notation rules and define the key symbols used herein (Table 1). In this paper, matrices and vectors are represented by bold letters. The notation $E_m$ signifies an identity matrix of dimensions $m \times m$. Furthermore, the caret symbol $\hat{\ }$ placed above a matrix, vector, or scalar designates its adaptive estimate.

2. Problem Statement and Analysis

In this study, a stationary camera with uncertain external and internal parameters, positioned at an undetermined location, serves as the vision feedback sensor to control a robotic manipulator. This manipulator operates under uncertainty in both the robotic kinematics and dynamics, including the unknown dynamic parameters of the servo motors in the robotic manipulator joints. Moreover, the joint actuators and their motors are subject to various non-differentiable nonlinear disturbances, which introduce additional unknown disturbances into the dynamic control system. The primary control objective is to enable the precise positioning of the robotic end-effector at a specific image location, employing uncalibrated visual feedback despite these challenging conditions. The system frame is established as shown in Figure 1. The following subsections address these issues in detail, exploring their implications and potential solutions.

2.1. Kinematics

In this subsection, the kinematic relationship between the robot and the camera is discussed. Two coordinate frames are established for delineating the visual robot system: the world frame, positioned at the base of the robotic manipulator, and the visual frame, located at the camera center. The vector $p \in \mathbb{R}^{4 \times 1}$ represents the (homogeneous) spatial position of the feature point marked on the end-effector, and the vector $q \in \mathbb{R}^{n \times 1}$ signifies the positions of the manipulator joints. The transformation relationship between the world frame and the visual frame is delineated as
$$p_c = T p, \tag{1}$$
where the matrix $T$ is the homogeneous transformation matrix that represents the camera extrinsic parameters. Incorporating the intrinsic parameters of the stationary camera, the image coordinate $y \in \mathbb{R}^{3 \times 1}$ under the projection theory is described as
$$y = \frac{1}{z}\,\Xi\, p, \tag{2}$$
where $\Xi \in \mathbb{R}^{3 \times 4}$ represents the unknown parameters of the camera and constitutes the projection matrix, which is the product of the intrinsic and extrinsic parameter matrices. This equation is the perspective projection equation of the stationary camera. The image position vector is denoted in homogeneous form:
$$y = \begin{bmatrix} u(t) \\ v(t) \\ 1 \end{bmatrix}, \tag{3}$$
where $y(t)$ represents the projection coordinates of the feature point on the camera image plane. In addition, $z$ represents the depth of the feature point with respect to the visual frame, which is a time-varying value. By the projection theory, the depth value can be computed by
$$z = \xi_3^{T} p. \tag{4}$$
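To make the projection model concrete, the following minimal NumPy sketch evaluates Equations (2) and (4) for a hypothetical projection matrix $\Xi$ and feature position $p$; both values are illustrative placeholders only, since in the uncalibrated setting $\Xi$ is unknown to the controller.

```python
import numpy as np

# Hypothetical projection matrix Xi = K [R | t] (3x4); placeholder values,
# not the calibration of any real camera.
Xi = np.array([[800.0,   0.0, 320.0, 10.0],
               [  0.0, 800.0, 240.0, 20.0],
               [  0.0,   0.0,   1.0,  1.5]])

p = np.array([0.3, 0.2, 0.8, 1.0])  # homogeneous feature position

z = Xi[2] @ p                       # depth, Eq. (4): z = xi_3^T p
y = (Xi @ p) / z                    # image coordinate, Eq. (2): y = (1/z) Xi p
print(z, y)                         # y is homogeneous: [u, v, 1]
```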
Further, the Jacobian mapping relationship between the joint velocity space and the end-effector velocity space is established as
$$\dot{p} = J \dot{q}, \tag{5}$$
where $J$ is the robotic kinematic Jacobian matrix. In traditional approaches, the image Jacobian mapping matrix relies on the depth information, $z(t)$, of the feature point within the visual frame. However, in the context of an uncalibrated visual servoing control system, this depth feature cannot be directly accessed. To circumvent the issue of unknown depth, a depth-independent interaction matrix is introduced. Through the derivative of the projection model (2) and the depth relationship (4), we obtain the following velocity relationship between the image plane and the spatial position:
$$\dot{y} = \frac{1}{z}\left[\Xi \dot{p} - y \dot{z}\right] = \frac{1}{z}\left(\Xi - y\,\xi_3^{T}\right)\dot{p}, \tag{6}$$
where $\xi_i^{T}$ is the $i$th row of the matrix $\Xi$. The novel Jacobian matrix is represented as
$$A(y) = \Xi - y\,\xi_3^{T}, \tag{7}$$
which is called the depth-independent interaction matrix; it depends solely on the image coordinates and the camera parameters. It is important to note that, due to the uncertainties associated with the camera parameters and the robotic kinematics, both the joint kinematic Jacobian matrix and the depth-independent interaction matrix are uncertain in an uncalibrated environment. To facilitate the parametrization and estimation of these parameters, an auxiliary matrix is established, which defines the mapping relationship between the image velocity space and the joint velocity space and enables parameter estimation through a specified estimation rule. This matrix is denoted by
$$\Lambda(q, y) = A(y)\, J(q). \tag{8}$$
The matrices $A(y)$ and $\Lambda(q, y)$ are linear in the unknown camera parameters. Consequently, through further derivation, the following construction of $\Lambda(q, y)$ can be obtained:
$$\Lambda(q, y) = \begin{bmatrix} \xi_1^{T} j_1 - y_1 \xi_3^{T} j_1 & \xi_1^{T} j_2 - y_1 \xi_3^{T} j_2 & \xi_1^{T} j_3 - y_1 \xi_3^{T} j_3 \\ \xi_2^{T} j_1 - y_2 \xi_3^{T} j_1 & \xi_2^{T} j_2 - y_2 \xi_3^{T} j_2 & \xi_2^{T} j_3 - y_2 \xi_3^{T} j_3 \\ 0 & 0 & 0 \end{bmatrix}, \tag{9}$$
where $j_i$ is the $i$th column of the kinematic Jacobian matrix and $y_i$ is the $i$th element of the image projection coordinates of the feature point. $\Lambda(q, y)$ is called the depth-independent compound matrix. Hence, the image projection interaction relationship is reformulated as
$$\dot{y} = \frac{1}{z}\,\Lambda(q, y)\,\dot{q}, \tag{10}$$
and the derivative of the depth is recast as
$$\dot{z} = \lambda(q)\,\dot{q}, \tag{11}$$
where $\lambda(q)$ is defined in the following form:
$$\lambda(q) = \begin{bmatrix} \xi_3^{T} j_1 & \xi_3^{T} j_2 & \xi_3^{T} j_3 \end{bmatrix}. \tag{12}$$
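The construction above is easy to verify numerically. The sketch below builds $A(y)$, $\Lambda(q,y)$, and $\lambda(q)$ from a projection matrix and a kinematic Jacobian; all numerical values are hypothetical stand-ins. In practice these quantities are never formed from the true parameters, which are unknown, but from their adaptive estimates.

```python
import numpy as np

def depth_independent_terms(Xi, J, y):
    """A(y) = Xi - y xi_3^T (Eq. (7)), Lambda = A(y) J (Eq. (8)),
    lambda = xi_3^T J (Eq. (12)); J is the 4 x n Jacobian of the
    homogeneous position, so its last row is zero."""
    A = Xi - np.outer(y, Xi[2])     # depth-independent interaction matrix, 3x4
    Lam = A @ J                     # depth-independent compound matrix, 3xn
    lam = Xi[2] @ J                 # depth-independent compound vector, 1xn
    return A, Lam, lam

# Hypothetical values for illustration only.
Xi = np.array([[800.0, 0.0, 320.0, 10.0],
               [0.0, 800.0, 240.0, 20.0],
               [0.0, 0.0, 1.0, 1.5]])
y = np.array([220.0, 181.0, 1.0])
J = np.array([[-0.2, 0.1, 0.0],
              [ 0.4, 0.0, 0.1],
              [ 0.0, 0.3, 0.2],
              [ 0.0, 0.0, 0.0]])
A, Lam, lam = depth_independent_terms(Xi, J, y)  # then y_dot = Lam @ q_dot / z, Eq. (10)
```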
The matrix $\Lambda(q, y)$ and the vector $\lambda(q)$ include all the unknown parameters of both the robotic manipulator and the camera. Since the combination of the unknown kinematic parameters is linear in the other parameters, the following propositions are given.
Proposition 1.
For any product $\xi_i^{T} j_k$, the unknown parameters can be linearly reparameterized as
$$\xi_i^{T} j_k = \varphi_{ik}^{T}\,\theta, \tag{13}$$
where $\varphi_{ik}$ contains no unknown parameters of the manipulator or the camera, and all the unknown parameters of both the manipulator and the camera are compactly consolidated within the vector $\theta$.
Proposition 2.
For any constant vector $o$ of appropriate dimension, the products $\Lambda(q, y)\,o$ and $\lambda(q)\,o$ can be parameterized as
$$\Lambda(q, y)\,o = \chi_1(o, q, y)\,\theta, \tag{14}$$
$$\lambda(q)\,o = \chi_2(o, q)\,\theta, \tag{15}$$
where $\chi_1(o, q, y)$ and $\chi_2(o, q)$ are regression matrices.
Remark 1.
In this paper, the intrinsic and extrinsic parameters of the used camera are assumed to be unknown. To handle the uncertainty in camera parameters, an adaptive control technique shall be employed in this work, under which, instead of the real values of the parameters, their adaptive estimates are involved in the controller, which will be updated online by the corresponding adaptive laws. In this sense, we only need the approximate values of the unknown parameters to perform initialization of adaptive laws, as seen later on.

2.2. Dynamics

The control policy is formulated based on the dynamics rather than solely on the robotic kinematics in this research. The Lagrangian dynamic equations are utilized to model the motion of the robotic system. The robotic rigid dynamic equation is formulated as follows:
$$M(q)\ddot{q} + \left(\tfrac{1}{2}\dot{M}(q) + C(q, \dot{q})\right)\dot{q} + g(q) = \tau, \tag{16}$$
where $M(q)$ and $C(q, \dot{q})$ represent the inertia matrix and the Coriolis force matrix, respectively; $g(q)$ is the gravitational force vector; and $\tau$ is the torque input of the robotic manipulator. In our control strategy, the joint torque is generated by the joint actuator motors. Furthermore, for any vector $c$ of appropriate dimension, the following skew-symmetry property holds:
$$c^{T} C(q, \dot{q})\, c = 0. \tag{17}$$
Note that the uncertainties in the robotic dynamics mean that $M(q)$, $C(q, \dot{q})$, and $g(q)$ are uncertain. Subsequently, the joint actuator motors of the robotic manipulator are introduced; the specific type of motor discussed in this study is the direct current (DC) motor. Based on the voltage–current and torque equations of DC motors [37], the motor dynamic equation is utilized in the following form:
$$L\dot{\tau} + R\tau + N\dot{q} = u, \tag{18}$$
where $u$ is the voltage input of the joint motors, $L$ is the motor armature inductance matrix, $R$ represents the resistance matrix of the motors, and $N$ is the motor electromechanical conversion coefficient matrix. All three matrices are diagonal and positive definite. Like the dynamics of the robotic manipulator, the matrices $L$, $R$, and $N$ are all uncertain. In our control scheme, the actual control inputs are the voltage inputs to the motors that actuate the individual joints.
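For intuition, a forward-Euler simulation step of the motor dynamics (18) can be sketched as follows; the parameter values are nominal placeholders, since the scheme treats $L$, $R$, and $N$ as unknown and never uses them in the controller.

```python
import numpy as np

# Nominal motor matrices for illustration; uncertain in the paper.
L_m = np.diag([0.08, 0.08, 0.08])   # armature inductance
R_m = np.diag([5.0, 5.0, 5.0])      # resistance
N_m = np.diag([0.7, 0.7, 0.7])      # electromechanical conversion coefficient

def motor_step(tau, u, q_dot, dt=1e-4):
    """One forward-Euler step of Eq. (18): L dtau/dt = u - R tau - N q_dot."""
    tau_dot = np.linalg.solve(L_m, u - R_m @ tau - N_m @ q_dot)
    return tau + dt * tau_dot
```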
Remark 2.
Compared with the control schemes in [24,29], which only consider the torque controller design, the uncertain dynamics of the joint actuator motors are taken into consideration in this research. Since the motor dynamics involve the first-order time derivative of the output torque, the ideal output model relies on precise system parameters, which means that the uncertain motor physical parameters directly affect the torque-tracking performance of the controller. Hence, we conduct further research on this aspect in this paper.

3. Control Design

In this section, we propose a control framework that includes two virtual controllers designed to compute the reference joint velocity and the desired motor output torque, respectively, and a motor voltage controller to track the desired torque. In response to the uncertainties in kinematics and dynamics, coupled with unknown camera projection parameters and disturbances, five adaptive parameter update mechanisms are developed to accurately estimate these unknown parameters.

3.1. Controller

The desired image position of the feature point is denoted by $y_d$. Then, the image error between the current position and the desired position of the feature point is defined as
$$\Delta y = y - y_d. \tag{19}$$
By invoking the depth-independent image compound matrices (9) and (12), the image error is effectively mapped into the joint velocity space. Based on the image error, a virtual controller for the real-time computation of the reference joint angular velocity is designed as
$$\dot{q}_d = -\left(\hat{\Lambda}^{T} + \tfrac{1}{2}\,\hat{\lambda}_3^{T}\,\Delta y^{T}(t)\right) K\, \Delta y(t), \tag{20}$$
where $\hat{\Lambda}$ and $\hat{\lambda}_3$ are the estimates of the depth-independent compound matrix $\Lambda$ and vector $\lambda_3$, and $K$ is a positive definite diagonal image error gain matrix.
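A direct transcription of the virtual controller (20), written against the adaptive estimates, might look as follows; this is a sketch only, and the estimates $\hat{\Lambda}$ and $\hat{\lambda}_3$ are produced by the update law of Section 3.2.

```python
import numpy as np

def reference_joint_velocity(Lam_hat, lam3_hat, y, y_d, K):
    """Kinematic virtual controller, Eqs. (19)-(20):
    q_dot_d = -(Lam_hat^T + 0.5 * lam3_hat^T dy^T) K dy."""
    dy = y - y_d                                        # image error, Eq. (19)
    return -(Lam_hat.T + 0.5 * np.outer(lam3_hat, dy)) @ (K @ dy)
```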
To track the reference joint velocity by the joint torque controller, the joint velocity error needs to be controlled and is defined as
$$\Delta \dot{q} = \dot{q} - \dot{q}_d. \tag{21}$$
Taking into account the uncertainties inherent in the dynamics of the robotic manipulator, a new dynamic term is constructed. This formulation is derived by invoking the established robotic dynamic equation, as referenced in (16), and is presented in the following form:
$$H_1(\dot{q}_d, \ddot{q}_d, q) = M(q)\ddot{q}_d + \tfrac{1}{2}\dot{M}(q)\dot{q}_d + C(q, \dot{q})\dot{q}_d + g(q). \tag{22}$$
This term is designed as the rigid dynamic compensation term. It contains the unknown physical parameters, which cannot be linearly parameterized. To overcome this, we propose to employ an RBFNN to approximate it:
$$H_1(\dot{q}_d, \ddot{q}_d, q) = W_1^{T}\,\Psi_1(\dot{q}_d, \ddot{q}_d, q) + \delta_1, \tag{23}$$
where $W_1^{T} \in \mathbb{R}^{3 \times N_1}$ is the ideal weight matrix of the RBFNN; $N_1$ is the node number of the RBF network, which is an adjustable constant; and $\delta_1$ is the estimation error of the rigid dynamic approximator.
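As a concrete (hypothetical) instance of the approximator (23), a Gaussian RBF layer and the product $\hat{W}_1^{T}\Psi_1$ can be sketched as follows; the node count, centers, and widths are assumed design choices, not values from this paper.

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian RBF regressor Psi(x) in Eq. (23); centers has shape (N1, dim(x))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))            # shape (N1,)

# Example with assumed sizes: 25 nodes over the 9-dim input (q_dot_d, q_ddot_d, q).
rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(25, 9))
W1_hat = np.zeros((25, 3))                             # weight estimate, adapted online
x = np.zeros(9)
H1_hat = W1_hat.T @ rbf_features(x, centers)           # W1_hat^T Psi_1, a 3-vector
```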
By invoking this approximation, the second virtual controller, designed for dynamic torque control, is proposed as
$$\tau_d = \hat{W}_1^{T}\,\Psi_1(\dot{q}_d, \ddot{q}_d, q) + \dot{q}_d - K_1 \Delta\dot{q} + \tau_0, \tag{24}$$
where $\hat{W}_1$ is the estimated weight matrix, updated by the adaptive rule, and $K_1 \in \mathbb{R}^{n \times n}$ is the diagonal velocity gain matrix. $\tau_0$ is the adaptive robust compensator, which is given in the follow-up discussion. After the above virtual controller design, the real control output is proposed here. The torque-tracking error of the motor is denoted by
$$\Delta \tau = \tau - \tau_d. \tag{25}$$
Similarly, the dynamic equation for the joint DC motors, as referenced in (18), is also invoked. A new motor dynamic term is formulated as follows:
$$H_2(\dot{\tau}_d, \tau, \dot{q}, \dot{q}_d) = L\dot{\tau}_d + R\tau + N\dot{q}. \tag{26}$$
As in (23), the terms involving the unknown physical parameters of the motor are approximated by an RBFNN to compensate for them and improve the tracking performance. The ideal-weight RBF approximation is presented as
$$H_2(\dot{\tau}_d, \tau, \dot{q}, \dot{q}_d) = W_2^{T}\,\Psi_2(\dot{\tau}_d, \tau, \dot{q}, \dot{q}_d) + \delta_2, \tag{27}$$
where $W_2^{T} \in \mathbb{R}^{3 \times N_2}$ is the ideal weight matrix of the second RBFNN and $\delta_2$ is the estimation error of the motor dynamic approximator. To track the desired torque, the motor voltage input controller is designed as
$$u = \hat{W}_2^{T}\,\Psi_2(\dot{\tau}_d, \tau, \dot{q}, \dot{q}_d) - K_2 \Delta\tau - \Delta\dot{q} + u_0, \tag{28}$$
where the matrix $K_2$ is the torque error control gain and $u_0$ is the adaptive robust term in this controller.
Remark 3.
To improve the tracking performance, the control scheme is divided into three closed-loop feedback subsystems, as showcased in Figure 2. The practical controller output signal is the motor input voltage. The kinematic controller and the torque controller are virtual controllers for auxiliary calculation.
Remark 4.
To approximate the unknown nonlinear terms in the dynamic equations, we propose to employ RBFNNs to learn the behavior of the rigid dynamics and the motor dynamics. The RBF approximator does not require prior knowledge about the system model or the parameter ranges, which avoids the linear-in-parameters assumption and the tedious derivation needed to establish regression matrices in adaptive control policies. Another advantage of the RBF approximator is that it avoids taking higher derivatives of the unknown dynamic parameters.
The unknown disturbances in the dynamic system are nonnegligible in the process of controller design. Hence, the unknown disturbances $d_1$ and $d_2$ are taken into consideration in the joint torque controller and the motor voltage controller, respectively. In actual physical systems, the impact of disturbances is often limited. Based on this premise, we propose the following assumption.
Assumption 1.
The absolute value of each unknown disturbance $d_i$ in the dynamic system has an upper bound $D_i$, and this upper bound is unknown.
Since the absolute value of each RBF approximation error also has an upper bound, the sum of the disturbance and the approximation error can be denoted by $\omega_i$, with unknown upper bound $\Omega_i$. Hence, the controllers are further rewritten as
$$\tau_d + \omega_1 = \hat{W}_1^{T}\Psi_1 + \dot{q}_d - K_1 \Delta\dot{q} + \omega_1 + \tau_0, \tag{29}$$
$$u + \omega_2 = \hat{W}_2^{T}\Psi_2 - K_2 \Delta\tau - \Delta\dot{q} + \omega_2 + u_0. \tag{30}$$
Here, to eliminate the adverse impact of the terms $\omega_i$, the adaptive robust compensators introduced above are designed as
$$\tau_0 = -\hat{\Omega}_1 \operatorname{sgn}(\Delta\dot{q}), \tag{31}$$
$$u_0 = -\hat{\Omega}_2 \operatorname{sgn}(\Delta\tau), \tag{32}$$
where $\hat{\Omega}_i$, defined as the adaptive robust parameter, is the estimate of the upper bound $\Omega_i$, and $\operatorname{sgn}(\cdot)$ is the sign function, applied element-wise.
Remark 5.
The RBFNN approximation errors and the unknown disturbances directly impact the system stability and the convergence of system states and errors. A robust adaptive mechanism can ensure the system resistance to disturbances and maintain convergence performance.
Subsequently, the closed-loop dynamic system is delineated, incorporating the controller. Upon substituting the controllers into the dynamic equations, the dynamic systems are rewritten as follows:
$$M(q)\ddot{q} + \left(\tfrac{1}{2}\dot{M}(q) + C(q,\dot{q})\right)\dot{q} + g(q) = \hat{W}_1^{T}\Psi_1 + \dot{q}_d - K_1 \Delta\dot{q} + \omega_1 + \tau_0 + \Delta\tau, \tag{33}$$
$$L\dot{\tau} + R\tau + N\dot{q} = \hat{W}_2^{T}\Psi_2 - K_2 \Delta\tau - \Delta\dot{q} + \omega_2 + u_0. \tag{34}$$
The RBFNN weight error is defined as $\tilde{W} = W - \hat{W}$. Hence, by combining Equations (22), (23), (26), and (27), the above equations are rewritten as
$$M(q)\Delta\ddot{q} + \left(\tfrac{1}{2}\dot{M}(q) + C(q,\dot{q})\right)\Delta\dot{q} = -\tilde{W}_1^{T}\Psi_1 + \dot{q}_d - K_1 \Delta\dot{q} + \omega_1 + \tau_0 + \Delta\tau, \tag{35}$$
$$L\Delta\dot{\tau} = -\tilde{W}_2^{T}\Psi_2 - K_2 \Delta\tau - \Delta\dot{q} + \omega_2 + u_0. \tag{36}$$
By left-multiplying (35) by $\Delta\dot{q}^{T}$ and (36) by $\Delta\tau^{T}$, and using property (17), the closed-loop dynamic equations are constructed as follows:
$$\Delta\dot{q}^{T} M(q)\Delta\ddot{q} + \tfrac{1}{2}\Delta\dot{q}^{T}\dot{M}(q)\Delta\dot{q} = -\Delta\dot{q}^{T}\tilde{W}_1^{T}\Psi_1 + \Delta\dot{q}^{T}\dot{q}_d - \Delta\dot{q}^{T}K_1\Delta\dot{q} + \Delta\dot{q}^{T}\omega_1 + \Delta\dot{q}^{T}\tau_0 + \Delta\dot{q}^{T}\Delta\tau, \tag{37}$$
$$\Delta\tau^{T} L \Delta\dot{\tau} = -\Delta\tau^{T}\tilde{W}_2^{T}\Psi_2 - \Delta\tau^{T}K_2\Delta\tau - \Delta\tau^{T}\Delta\dot{q} + \Delta\tau^{T}\omega_2 + \Delta\tau^{T}u_0. \tag{38}$$

3.2. Adaptive Update Method

To update the unknown parameter estimates in real time while the controller drives the manipulator, various adaptive parameter update methods are introduced in this subsection. To compute the desired joint velocity and guarantee the robustness and the convergence of the image error, prior knowledge regarding the range of the unknown parameter vector $\theta$, which is not hard to obtain by a rough measurement, is encoded by a convex compact set $\Theta$. It is given as
$$\Theta = \left\{ \hat{\theta} \;\middle|\; \underline{\theta}_i \le \hat{\theta}_i \le \bar{\theta}_i \right\}, \qquad \hat{\theta}(0) \in \Theta, \tag{39}$$
where $\underline{\theta}_i$ is the lower bound and $\bar{\theta}_i$ the upper bound of the $i$th component. With the parameter estimation error defined as $\Delta\theta = \hat{\theta} - \theta$, the estimation errors of the compound matrices satisfy
$$\left(\Delta\Lambda^{T} + \tfrac{1}{2}\Delta\lambda_3^{T}\Delta y^{T}(t)\right) K \Delta y(t) = \left(\hat{\Lambda}^{T} + \tfrac{1}{2}\hat{\lambda}_3^{T}\Delta y^{T}(t)\right) K \Delta y(t) - \left(\Lambda^{T} + \tfrac{1}{2}\lambda_3^{T}\Delta y^{T}(t)\right) K \Delta y(t). \tag{40}$$
Further, by applying Proposition 2, the linear unknown parameter term is rewritten as
$$\left(\Delta\Lambda^{T} + \tfrac{1}{2}\Delta\lambda_3^{T}\Delta y^{T}(t)\right) K \Delta y(t) = Z(q, y)\,\Delta\theta. \tag{41}$$
According to the above derivation, the adaptive parameter update rule for the unknown mapping relationship between the camera and the manipulator is designed as
$$\dot{\hat{\theta}} = \Upsilon^{-1} Z^{T}(q, y)\,\dot{q} + \eta, \tag{42}$$
where $\Upsilon$ is a positive definite diagonal matrix, and $\eta$ is constructed as
$$\eta_i = \begin{cases} 0, & \underline{\theta}_i < \hat{\theta}_i < \bar{\theta}_i, \;\text{or}\; \hat{\theta}_i = \bar{\theta}_i \;\text{and}\; \varphi_i \le 0, \;\text{or}\; \hat{\theta}_i = \underline{\theta}_i \;\text{and}\; \varphi_i \ge 0, \\ -\varphi_i, & \text{otherwise}, \end{cases} \tag{43}$$
where $\varphi_i$ is the $i$th element of
$$\varphi = \Upsilon^{-1} Z^{T}(q, y)\,\dot{q}. \tag{44}$$
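In discrete-time form, the projection mechanism (42)-(44) amounts to clamping the update of any component sitting on its bound; a minimal sketch, with $\Upsilon^{-1}$, the regression matrix $Z$, and the bounds supplied by the user, is given below.

```python
import numpy as np

def theta_hat_dot(theta_hat, Z, q_dot, Ups_inv, lb, ub):
    """Projection-based adaptive law, Eqs. (42)-(44): components that would
    drift outside the known box [lb, ub] are frozen by eta (eta_i = -phi_i)."""
    phi = Ups_inv @ (Z.T @ q_dot)                      # Eq. (44)
    rate = phi.copy()
    blocked = ((theta_hat >= ub) & (phi > 0)) | ((theta_hat <= lb) & (phi < 0))
    rate[blocked] = 0.0                                # eta cancels phi there, Eq. (43)
    return rate
```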
For the torque controller and the voltage controller, the corresponding RBF weight matrix estimates are updated by the following parameter update laws:
$$\dot{\hat{W}}_1 = -\Gamma_1^{-1}\,\Psi_1\,\Delta\dot{q}^{T}, \tag{45}$$
$$\dot{\hat{W}}_2 = -\Gamma_2^{-1}\,\Psi_2\,\Delta\tau^{T}, \tag{46}$$
where the matrices $\Gamma_1$ and $\Gamma_2$ are symmetric positive definite update gain matrices.
The parameters of the robust compensators are estimated by
$$\dot{\hat{\Omega}}_1 = B_1^{-1}\left|\Delta\dot{q}\right|, \tag{47}$$
$$\dot{\hat{\Omega}}_2 = B_2^{-1}\left|\Delta\tau\right|, \tag{48}$$
where the matrices $B_1$ and $B_2$ are symmetric positive definite matrices.
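Putting the adaptive machinery together, one integration step of the weight and bound updates, followed by the evaluation of the robust compensators, can be sketched as below; this is a sketch under an assumed Euler step size, with signal names mirroring the equations above.

```python
import numpy as np

def adaptation_step(W1_hat, W2_hat, Om1_hat, Om2_hat,
                    Psi1, Psi2, dq_dot, dtau,
                    G1_inv, G2_inv, B1_inv, B2_inv, dt=1e-4):
    """Forward-Euler integration of the update laws (45)-(48), followed by the
    robust compensators (31)-(32)."""
    W1_hat = W1_hat - dt * (G1_inv @ np.outer(Psi1, dq_dot))   # Eq. (45)
    W2_hat = W2_hat - dt * (G2_inv @ np.outer(Psi2, dtau))     # Eq. (46)
    Om1_hat = Om1_hat + dt * (B1_inv @ np.abs(dq_dot))         # Eq. (47)
    Om2_hat = Om2_hat + dt * (B2_inv @ np.abs(dtau))           # Eq. (48)
    tau0 = -Om1_hat * np.sign(dq_dot)                          # Eq. (31)
    u0 = -Om2_hat * np.sign(dtau)                              # Eq. (32)
    return W1_hat, W2_hat, Om1_hat, Om2_hat, tau0, u0
```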

4. System Stability

The efficacy of the proposed cascade controller is validated in this section using the second Lyapunov method, and the stability of the control system is established.
Theorem 1.
Under controllers (20), (24), and (28), together with adaptive rules (42) and (45)–(48), the stability of the visual manipulator system with uncertainties in the kinematics and the dynamics and with unknown disturbances is guaranteed, the closed-loop control task $\lim_{t\to\infty} \Delta y = 0$ is achieved, and the system state errors are uniformly bounded (Algorithm 1).
Algorithm 1: Cascade adaptive visual servoing control algorithm.
Proof. 
The Lyapunov function applied to the stability proof is constructed as
$$V(t) = \tfrac{1}{2}\Big\{ z(t)\,\Delta y^{T} K \Delta y + \Delta\dot{q}^{T} M \Delta\dot{q} + \Delta\tau^{T} L \Delta\tau \tag{49}$$
$$\qquad + \Delta\theta^{T}\Upsilon\Delta\theta + \sum_{i=1}^{2}\big( \tilde{\Omega}_i^{T} B_i \tilde{\Omega}_i + \operatorname{Tr}\{\tilde{W}_i^{T}\Gamma_i\tilde{W}_i\} \big) \Big\}. \tag{50}$$
The system Lyapunov function is divided into three parts, corresponding to three controllers (20), (24), and (28), which are denoted by
$$V_1(t) = \tfrac{1}{2}\left\{ z(t)\,\Delta y^{T} K \Delta y + \Delta\theta^{T} \Upsilon \Delta\theta \right\}, \tag{51}$$
$$V_2(t) = \tfrac{1}{2}\left\{ \Delta\dot{q}^{T} M \Delta\dot{q} + \tilde{\Omega}_1^{T} B_1 \tilde{\Omega}_1 + \operatorname{Tr}\{\tilde{W}_1^{T} \Gamma_1 \tilde{W}_1\} \right\}, \tag{52}$$
$$V_3(t) = \tfrac{1}{2}\left\{ \Delta\tau^{T} L \Delta\tau + \tilde{\Omega}_2^{T} B_2 \tilde{\Omega}_2 + \operatorname{Tr}\{\tilde{W}_2^{T} \Gamma_2 \tilde{W}_2\} \right\}. \tag{53}$$
Regarding the first Lyapunov subfunction, differentiation yields
$$\dot{V}_1(t) = \tfrac{1}{2}\dot{z}(t)\,\Delta y^{T} K \Delta y + z(t)\,\Delta y^{T} K \Delta\dot{y} + \Delta\theta^{T} \Upsilon \dot{\hat{\theta}}. \tag{54}$$
To facilitate further derivation, the auxiliary term $\dot{q}^{T}\dot{q}_d$ is introduced into the derivation process. Then,
$$\begin{aligned} \dot{V}_1(t) &= \tfrac{1}{2}\dot{z}(t)\,\Delta y^{T} K \Delta y + z(t)\,\Delta y^{T} K \Delta\dot{y} + \Delta\theta^{T}\Upsilon\dot{\hat{\theta}} + \dot{q}^{T}\dot{q}_d - \dot{q}^{T}\dot{q}_d \\ &= \dot{q}^{T}\left(\Lambda^{T} + \tfrac{1}{2}\lambda_3^{T}\Delta y^{T}(t)\right) K \Delta y(t) + \Delta\theta^{T}\Upsilon\dot{\hat{\theta}} - \dot{q}^{T}\dot{q}_d - \dot{q}^{T}\left(\hat{\Lambda}^{T} + \tfrac{1}{2}\hat{\lambda}_3^{T}\Delta y^{T}(t)\right) K \Delta y(t). \end{aligned} \tag{55}$$
By substituting adaptive rule (42), this subfunction reduces to
$$\dot{V}_1(t) = -\dot{q}^{T}\dot{q}_d + \Delta\theta^{T} \Upsilon \eta. \tag{56}$$
Following [29], we know that
$$\Delta\theta^{T} \Upsilon \eta \le 0. \tag{57}$$
Hence, we obtain
$$\dot{V}_1(t) \le -\dot{q}^{T}\dot{q}_d. \tag{58}$$
Next, the second Lyapunov function (52) is differentiated, and the closed-loop robotic dynamic Equation (37) is invoked:
$$\begin{aligned} \dot{V}_2(t) &= \Delta\dot{q}^{T} M \Delta\ddot{q} + \tfrac{1}{2}\Delta\dot{q}^{T}\dot{M}\Delta\dot{q} + \tilde{\Omega}_1^{T} B_1 \dot{\tilde{\Omega}}_1 + \operatorname{Tr}\{\tilde{W}_1^{T}\Gamma_1\dot{\tilde{W}}_1\} \\ &= -\Delta\dot{q}^{T}\tilde{W}_1^{T}\Psi_1 + \Delta\dot{q}^{T}\dot{q}_d - \Delta\dot{q}^{T}K_1\Delta\dot{q} + \Delta\dot{q}^{T}\omega_1 + \Delta\dot{q}^{T}\tau_0 + \Delta\dot{q}^{T}\Delta\tau + \tilde{\Omega}_1^{T} B_1 \dot{\tilde{\Omega}}_1 + \operatorname{Tr}\{\tilde{W}_1^{T}\Gamma_1\dot{\tilde{W}}_1\}. \end{aligned} \tag{59}$$
We combine RBF update rule (45) and adaptive robust compensator (31) with the corresponding update law (47):
$$\begin{aligned} \dot{V}_2(t) &= -\Delta\dot{q}^{T}\tilde{W}_1^{T}\Psi_1 + \Delta\dot{q}^{T}\dot{q}_d - \Delta\dot{q}^{T}K_1\Delta\dot{q} + \sum_{i=1}^{n}\Delta\dot{q}_i\left(\omega_{1i} - \hat{\Omega}_{1i}\operatorname{sgn}(\Delta\dot{q}_i)\right) + \Delta\dot{q}^{T}\Delta\tau - \tilde{\Omega}_1^{T}\left|\Delta\dot{q}\right| + \operatorname{Tr}\{\tilde{W}_1^{T}\Gamma_1\dot{\tilde{W}}_1\} \\ &\le -\Delta\dot{q}^{T}\tilde{W}_1^{T}\Psi_1 + \Delta\dot{q}^{T}\dot{q}_d - \Delta\dot{q}^{T}K_1\Delta\dot{q} + \sum_{i=1}^{n}\left(\Omega_{1i} - \hat{\Omega}_{1i}\right)\left|\Delta\dot{q}_i\right| + \Delta\dot{q}^{T}\Delta\tau - \tilde{\Omega}_1^{T}\left|\Delta\dot{q}\right| + \operatorname{Tr}\{\tilde{W}_1^{T}\Gamma_1\dot{\tilde{W}}_1\} \\ &\le \Delta\dot{q}^{T}\dot{q}_d - \Delta\dot{q}^{T}K_1\Delta\dot{q} + \Delta\dot{q}^{T}\Delta\tau. \end{aligned} \tag{60}$$
For the Lyapunov subfunction associated with the motor section in (53), we derive its derivative as
$$\dot{V}_3(t) = \Delta\tau^{T} L \Delta\dot{\tau} + \tilde{\Omega}_2^{T} B_2 \dot{\tilde{\Omega}}_2 + \operatorname{Tr}\{\tilde{W}_2^{T}\Gamma_2\dot{\tilde{W}}_2\}. \tag{61}$$
Analogously, by combining closed-loop dynamics (38) and adaptive rules (46) and (48), we can derive
$$\begin{aligned} \dot{V}_3(t) &= -\Delta\tau^{T}\tilde{W}_2^{T}\Psi_2 - \Delta\tau^{T}K_2\Delta\tau - \Delta\tau^{T}\Delta\dot{q} + \Delta\tau^{T}\omega_2 + \Delta\tau^{T}u_0 + \tilde{\Omega}_2^{T} B_2 \dot{\tilde{\Omega}}_2 + \operatorname{Tr}\{\tilde{W}_2^{T}\Gamma_2\dot{\tilde{W}}_2\} \\ &\le -\Delta\tau^{T}K_2\Delta\tau + \sum_{i=1}^{n}\left(\Omega_{2i} - \hat{\Omega}_{2i}\right)\left|\Delta\tau_i\right| - \tilde{\Omega}_2^{T}\left|\Delta\tau\right| - \Delta\tau^{T}\Delta\dot{q} \\ &\le -\Delta\tau^{T}K_2\Delta\tau - \Delta\tau^{T}\Delta\dot{q}. \end{aligned} \tag{62}$$
Furthermore, we can directly obtain that the derivative of the Lyapunov function for the entire system satisfies the following constraints:
$$\dot{V}(t) \le -\Delta\tau^{T} K_2 \Delta\tau - \Delta\dot{q}^{T} K_1 \Delta\dot{q} - \dot{q}_d^{T}\dot{q}_d. \tag{63}$$
Hence, $V \ge 0$ and $\dot{V} \le 0$ are proven, and the Lyapunov function $V$ is bounded. Further, by applying Barbalat's lemma, we obtain
$$\lim_{t\to\infty} \dot{q}_d = 0, \tag{64}$$
$$\lim_{t\to\infty} \Delta\dot{q} = 0, \tag{65}$$
$$\lim_{t\to\infty} \Delta\tau = 0. \tag{66}$$
By combining (64) and (65), it is easy to derive
$$\lim_{t\to\infty} \dot{q} = 0. \tag{67}$$
At the equilibrium point, the reference joint velocity (20) is cited again. Based on the inference mentioned above, we can obtain
$$\left(\hat{\Lambda}^{T} + \tfrac{1}{2}\hat{\lambda}_3^{T}\Delta y^{T}(t)\right) K \Delta y(t) \triangleq \Theta(\hat{\theta}, y)\, K \Delta y(t) = 0. \tag{68}$$
A new matrix is constructed as
$$\Theta^{T}\Theta = \begin{bmatrix} N & 0_{2\times1} \\ 0_{1\times2} & 0 \end{bmatrix}. \tag{69}$$
Hence, $\operatorname{Rank}(\Theta^{T}\Theta) = \operatorname{Rank}(N)$. If the matrix $N$ were singular, the following constraints would have to be satisfied, i.e.,
$$\Theta_{11}\Theta_{22} - \Theta_{12}\Theta_{21} = 0, \quad \Theta_{11}\Theta_{23} - \Theta_{13}\Theta_{21} = 0, \quad \Theta_{12}\Theta_{23} - \Theta_{13}\Theta_{22} = 0, \tag{70}$$
which is impossible to satisfy under the set of constraints in the adaptive rule. Hence, we conclude that the matrix $N$ is invertible and the rank of $\Theta$ is 2. Therefore, we conclude as follows:
$$\lim_{t\to\infty} \Delta y = 0. \tag{71}$$
□
Remark 6.
The consideration of the orientation of the end-effector is particularly important for practical applications. This is also a challenging problem for this study. To overcome it, a possible solution is provided below. By selecting multiple feature points, the concrete kinematic joint velocity controller can be designed in the following form:
$$\dot{q}_r = -\left(\hat{\Lambda}_1^{T} + \tfrac{1}{2}\hat{\lambda}_1^{T}\Delta y_1^{T}\right) K \Delta y_1 - \left(\hat{\Lambda}_2^{T} + \tfrac{1}{2}\hat{\lambda}_2^{T}\Delta y_2^{T}\right) K \Delta y_2 - \cdots - \left(\hat{\Lambda}_m^{T} + \tfrac{1}{2}\hat{\lambda}_m^{T}\Delta y_m^{T}\right) K \Delta y_m, \tag{72}$$
where m is the number of the feature points marked on the end-effector. The adaptive law is designed as
$$\dot{\hat{\theta}} = \Upsilon^{-1} Z^{T}(q, y_1, y_2, \ldots, y_m)\,\dot{q} + \eta, \tag{73}$$
$$\eta_i = \begin{cases} 0, & \underline{\theta}_i < \hat{\theta}_i < \bar{\theta}_i, \;\text{or}\; \hat{\theta}_i = \bar{\theta}_i \;\text{and}\; \varphi_i \le 0, \;\text{or}\; \hat{\theta}_i = \underline{\theta}_i \;\text{and}\; \varphi_i \ge 0, \\ -\varphi_i, & \text{otherwise}, \end{cases} \tag{74}$$
$$\varphi = \Upsilon^{-1} Z^{T}(q, y_1, y_2, \ldots, y_m)\,\dot{q}. \tag{75}$$
The non-negative function of the image loop is given as
$$V(t) = \tfrac{1}{2}\left\{ z\,\Delta y^{T} K \Delta y + \Delta\theta^{T} \Upsilon \Delta\theta \right\}. \tag{76}$$
Then, the derivative of this Lyapunov function is
$$\dot{V}_1(t) = \tfrac{1}{2}\dot{z}(t)\,\Delta y^{T} K \Delta y + z(t)\,\Delta y^{T} K \Delta\dot{y} + \Delta\theta^{T} \Upsilon \dot{\hat{\theta}}. \tag{77}$$
Similarly, we can obtain
$$\dot{V}_1(t) \le -\dot{q}^{T}\dot{q}_r. \tag{78}$$
When the image errors converge to the equilibrium points, the joint position and the kinematic parameter estimates can be regarded as constant vectors. Denoting the transformation matrix of the feature points by $T$, we have
$$y_i = \frac{1}{z_i}\,\Xi\, T\, p_i. \tag{79}$$
The transformation matrix can be represented by six unknown variables. When the number of feature points $m$ is larger than 2 and $n \ge 6$, the kinematic joint velocity controller can be realized as $n$ nonlinear equations with a unique solution, meaning that only one set of image vectors can satisfy it. If the $n$ nonlinear equations have more than one solution, the image errors may not converge to zero.

5. Simulation

The simulation of this study is based on MATLAB/Simulink (R2021b) and the Robotics Toolbox. Based on the above analysis, it is evident that only the spatial coordinates of the end-effector of the robotic arm are required for the entire control task, without consideration of its orientation. Therefore, in this simulation, a simplified three-degree-of-freedom (3-DOF) PUMA560 robot is utilized to verify the validity of the proposed scheme. The system configuration is the electrically driven PUMA560 robotic manipulator with a stationary camera. The system frame is showcased in Figure 3, and the simulation system configuration is shown in Figure 4.
The projection matrix of the camera is given as
$$\Xi = \begin{bmatrix} 750 & 362 & 362 & 281.9 \\ 0 & 168.3 & 892.4 & 506.9 \\ 0 & 0.7071 & 0.7071 & 0.9899 \end{bmatrix}.$$
The robotic manipulator is described by the Denavit–Hartenberg (D-H) method, which includes the kinematic parameters of the robot. Combining the dynamic parameters, the robotic physical parameters are showcased in Table 2.
The parameters of the joint actuator DC motor are set to
$$L = \operatorname{diag}(0.08,\ 0.0805,\ 0.0788), \quad R = \operatorname{diag}(5,\ 4.758,\ 5.212), \quad N = \operatorname{diag}(0.7,\ 0.721,\ 0.711).$$
The parameter vector is initialized as $\theta$ = [323.8500, 156.3283, −156.3283, 0, −72.6682, −385.3248, 0, 0.3053, −0.3053, 363.975, 175.6974, −175.6974, 0, −81.6718, −433.0666, 0, 0.3432, −0.3432, 97.1250, 46.884, −46.884, 0, −21.7937, −115.5618, 0, 0.0916, −0.0916]. The image error gain matrix is $K = 0.0015 \times E_3$; the velocity error gain matrix is $K_1 = \operatorname{diag}(40, 55, 45)$; and the torque error gain matrix is $K_2 = \operatorname{diag}(17, 1.2, 15)$. The adaptive gain matrices are $\Gamma_1 = 114 \times E_n$, $\Gamma_2 = 1.75 \times E_n$, $B_1 = \operatorname{diag}(0.5, 0.3, 0.45)$, and $B_2 = \operatorname{diag}(4, 7, 0.3)$, where $n$ is the node number of the RBFNNs.
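For reference, the gain settings above translate directly into matrix form; the sketch below is a NumPy transcription of the MATLAB configuration, where the node count is not stated in the text and is therefore an assumed value.

```python
import numpy as np

n_nodes = 25                          # RBFNN node count; an assumed value
K  = 0.0015 * np.eye(3)               # image error gain
K1 = np.diag([40.0, 55.0, 45.0])      # joint velocity error gain
K2 = np.diag([17.0, 1.2, 15.0])       # torque error gain
G1 = 114.0 * np.eye(n_nodes)          # RBF weight update gain Gamma_1
G2 = 1.75 * np.eye(n_nodes)           # RBF weight update gain Gamma_2
B1 = np.diag([0.5, 0.3, 0.45])        # robust bound update gain
B2 = np.diag([4.0, 7.0, 0.3])         # robust bound update gain
```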
In this simulation, the simulation time is 2 s, and the convergence time of the system is less than 0.8 s. As shown in Figure 5, the image tracking error converges to zero, demonstrating that the proposed control strategy is capable of accomplishing the primary control task. In Figure 6, the convergence of the joint angles $q$, joint velocities $\dot{q}$, and actuator torques $\tau$ of the robotic manipulator is observed. Consistent with conclusion (67) of the stability analysis, the joint velocity $\dot{q}$ converges to zero. Further, we can observe in Figure 7 that the tracking errors of the joint angular velocity of the robotic manipulator and of the torque output of the motor also converge to zero. It can be observed that by 0.6 s, all system errors have converged to minimal values, and the various states of the system begin to stabilize.
The trajectory of the feature point projection on the image plane is shown in Figure 8, and the trajectory of the feature point in three-dimensional space is depicted in Figure 9. It should be noted that in our control scheme, the reference joint velocity computed by the kinematic controller only guarantees the convergence of the image error, which means that the end-effector may not reach the desired spatial position in the uncalibrated environment. To ensure that the end-effector reaches the desired position in three-dimensional space, it is necessary to select multiple non-collinear feature points on the end-effector. The evolution of the adaptive parameters of the robust compensators, $\hat{\Omega}_1$ and $\hat{\Omega}_2$, is recorded in Figure 10.
Furthermore, we propose the use of the Euclidean norm as a measure of accuracy; it directly indicates the convergence of multidimensional data in the experiment. Through computation, the norms of the image error, joint velocity error, and torque tracking error are smaller than 1.2 pixels, 0.01 rad/s, and 20 N·m, respectively, after the convergence of the control system.
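This accuracy measure is straightforward to compute from logged trajectories; a minimal sketch, assuming the 0.8 s convergence time observed above, is:

```python
import numpy as np

def post_convergence_norm(err_traj, t, t_conv=0.8):
    """Largest Euclidean norm of a recorded error trajectory after the
    convergence time. err_traj: (T, dim) samples; t: (T,) time stamps."""
    settled = err_traj[t >= t_conv]
    return np.max(np.linalg.norm(settled, axis=1))
```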
The above simulation results showcase the boundedness and convergence of the system states and the tracking errors, which verifies Theorem 1 presented in this paper. Under the uncertainties in the kinematics and dynamics, the proposed control scheme utilizes state and output feedback to regulate the input voltage of the manipulator joint actuators, thereby accomplishing the task of image position tracking.
Furthermore, to assess the robustness and anti-interference capabilities of the controller, Gaussian noise is introduced into both the torque controller and the voltage controller. Considering the range of the control signals, Gaussian noise with a mean of 0 and variances of 300 and 700 is injected into the two controllers, respectively. The simulation results, including the joint torque error $\Delta\tau$, joint velocity error $\Delta\dot{q}$, and image error $\Delta y$, are presented in Figure 11.
It can be observed that, compared with the scenario without random disturbances, the control performance does not degrade significantly even when substantial disturbances are introduced. These results demonstrate that the proposed adaptive RBFNN controller is robust against system parameter uncertainties and random disturbances.
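The disturbance injection used in this test amounts to adding zero-mean Gaussian noise of the stated variances to the controller outputs; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def disturb(signal, variance):
    """Zero-mean Gaussian disturbance as in the robustness test
    (variance 300 for the torque channel, 700 for the voltage channel)."""
    return signal + rng.normal(0.0, np.sqrt(variance), size=signal.shape)
```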
Remark 7.
The effectiveness of the proposed control scheme is mainly verified by theoretical analysis and simulation results. Presently, how to confirm it further by experimental results based on robotic dynamics is still difficult for us. Despite the difficulties, we have built a hardware system platform for the preparation of experimental tests. In our future study, we will first perform a practical test based on the kinematic model of the manipulator and then generalize the results to the dynamic model, so that the control scheme proposed in this work can be well validated by theoretical analysis, numerical simulation, and experimental tests simultaneously.

6. Conclusions

For the electrically driven robotic manipulator in an uncalibrated environment, we propose an adaptive RBF cascade visual servoing control scheme to solve the problem of the uncertainties arising from the camera and the manipulator. The proposed control policy does not require any prior knowledge of the robot dynamics; it only necessitates a rough measurement of the camera and robot kinematics. By dividing the system into three subsystems (the image loop, the joint velocity loop, and the torque loop), the uncertainties can be addressed separately within each subsystem. Additionally, an adaptive update algorithm and RBFNNs are employed to estimate and compensate for the unknown camera parameters and the uncertainties in the kinematics and dynamics. Furthermore, corresponding adaptive robust mechanisms are introduced to ensure the stability of the system. Through Lyapunov theory, the global stability of the system under the proposed control scheme and the boundedness and convergence of the system states and tracking errors are proven. However, the proposed control scheme is evaluated primarily through theoretical analysis, rigorous stability proofs, and simulations, with limited practical verification of its effectiveness.
Although the simulation results show that the image tracking errors under different initial values can converge to a residual around zero in finite time, how to prove this finite-time convergence property theoretically, i.e., establishing some Lyapunov-based conditional inequalities for ensuring finite-time or fixed-time stability, is still difficult for us, which will be regarded as a future research topic and the extension of this work.

Author Contributions

Methodology, Z.C.; Software, W.W.; Formal analysis, Z.C., W.W. and W.Y.; Investigation, W.W.; Data curation, W.Y.; Writing—original draft, Z.C.; Writing—review & editing, Z.C.; Visualization, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Special Projects in Key Fields of Colleges and Universities in Guangdong Province, China (No. 2023ZDZX1078, No. 2022ZDZX1070), the Tertiary Education Scientific Research Project of Guangzhou Municipal Education Bureau, China (No. 202235364), and the Research Project of Guangzhou City Polytechnic (No. KYTD2023004).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, J.; Jin, Z.; Liu, A.; Yu, L.; Yang, F. A survey of learning-based control of robotic visual servoing systems. J. Frankl. Inst. 2022, 359, 556–577.
  2. Zhang, K.; Shi, Y.; Sheng, H. Robust nonlinear model predictive control based visual servoing of quadrotor UAVs. IEEE/ASME Trans. Mechatron. 2021, 26, 700–708.
  3. Zhou, S.; Shen, C.; Pang, F.; Chen, Z.; Gu, J.; Zhu, S. Position-based visual servoing control for multi-joint hydraulic manipulator. J. Intell. Robot. Syst. 2022, 105, 33.
  4. Xu, F.; Zhang, Y.; Sun, J.; Wang, H. Adaptive visual servoing shape control of a soft robot manipulator using bezier curve features. IEEE/ASME Trans. Mechatron. 2022, 28, 945–955.
  5. Chen, Y.; Wu, Y.; Zhang, Z.; Miao, Z.; Zhong, H.; Zhang, H.; Wang, Y. Image-based visual servoing of unmanned aerial manipulators for tracking and grasping a moving target. IEEE Trans. Ind. Inform. 2022, 19, 8889–8899.
  6. Huang, H.; Bian, X.; Cai, F.; Li, J.; Jiang, T.; Zhang, Z.; Sun, C. A review on visual servoing for underwater vehicle manipulation systems automatic control and case study. Ocean Eng. 2022, 260, 112065.
  7. Ribeiro, E.G.; Mendes, R.Q.; Terra, M.H.; Grassi, V. Second-order position-based visual servoing of a robot manipulator. IEEE Robot. Autom. Lett. 2023, 9, 207–214.
  8. Dong, G.; Zhu, Z. Position-based visual servo control of autonomous robotic manipulators. Acta Astronaut. 2015, 115, 291–302.
  9. Thuilot, B.; Martinet, P.; Cordesses, L.; Gallice, J. Position based visual servoing: Keeping the object in the field of vision. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 2, pp. 1624–1629.
  10. Corke, P.I.; Hutchinson, S.A. A new partitioned approach to image-based visual servo control. IEEE Trans. Robot. Autom. 2001, 17, 507–515.
  11. Chaumette, F. Image moments: A general and useful set of features for visual servoing. IEEE Trans. Robot. 2004, 20, 713–723.
  12. Sanderson, A.C.; Weiss, L.E. Image-based visual servo control of robots. Robotics and industrial inspection. In Proceedings of the 6th Annual Technical Symposium—SPIE, San Diego, CA, USA, 24–27 August 1982; Volume 360, pp. 164–169.
  13. Hamel, T.; Mahony, R. Image based visual servo control for a class of aerial robotic systems. Automatica 2007, 43, 1975–1983.
  14. Li, W.; Xiong, R. A hybrid visual servo control method for simultaneously controlling a nonholonomic mobile and a manipulator. Front. Inf. Technol. Electron. Eng. 2021, 22, 141–154.
  15. Corke, P.; Hutchinson, S. A new hybrid image-based visual servo control scheme. In Proceedings of the 39th IEEE Conference on Decision and Control (Cat. No. 00CH37187), Sydney, NSW, Australia, 12–15 December 2000; Volume 3, pp. 2521–2526.
  16. Hosoda, K.; Igarashi, K.; Asada, M. Adaptive hybrid visual servoing/force control in unknown environment. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS’96, Osaka, Japan, 4–8 November 1996; IEEE: Piscataway, NJ, USA, 1996; Volume 3, pp. 1097–1103.
  17. Ceren, Z.; Altuğ, E. Image based and hybrid visual servo control of an unmanned aerial vehicle. J. Intell. Robot. Syst. 2012, 65, 325–344.
  18. Bo, T.; Zeyu, G.; Han, D. Survey on uncalibrated robot visual servoing control. Chin. J. Theor. Appl. Mech. 2016, 48, 767–783.
  19. Malis, E.; Chaumette, F. 2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement. Int. J. Comput. Vis. 2000, 37, 79–97.
  20. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 1, pp. 666–673.
  21. Kelly, R. Robust asymptotically stable visual servoing of planar robots. IEEE Trans. Robot. Autom. 1996, 12, 759–766.
  22. Sato, T.; Sato, J. Visual servoing from uncalibrated cameras for uncalibrated robots. Syst. Comput. Jpn. 2000, 31, 11–19.
  23. Shen, Y.; Xiang, G.; Liu, Y.H.; Li, K. Uncalibrated visual servoing of planar robots. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 1, pp. 580–585.
  24. Liu, Y.H.; Wang, H.; Wang, C.; Lam, K.K. Uncalibrated visual servoing of robots using a depth-independent interaction matrix. IEEE Trans. Robot. 2006, 22, 804–817.
  25. Gong, Z.; Tao, B.; Yang, H.; Yin, Z.; Ding, H. Projective Homography Based Uncalibrated Visual Servoing. In Proceedings of the 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Kaiulani, HI, USA, 31 July–4 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 364–369.
  26. Wu, J.; Jin, Z.; Liu, A.; Yu, L. Vision-based neural predictive tracking control for multi-manipulator systems with parametric uncertainty. ISA Trans. 2021, 110, 247–257.
  27. Lee, J.; Dallali, H.; Jin, M.; Caldwell, D.G.; Tsagarakis, N.G. Robust and adaptive dynamic controller for fully-actuated robots in operational space under uncertainties. Auton. Robot. 2019, 43, 1023–1040.
  28. Hu, J.; Wang, P.; Xu, C.; Zhou, H.; Yao, J. High accuracy adaptive motion control for a robotic manipulator with model uncertainties based on multilayer neural network. Asian J. Control 2022, 24, 1503–1514.
  29. Lai, G.; Liu, A.; Yang, W.; Chen, Y.; Zhao, L. Uncalibrated Adaptive Visual Servoing of Robotic Manipulators with Uncertainties in Kinematics and Dynamics. Actuators 2023, 12, 143.
  30. Yang, J.; Chen, W.H.; Li, S.; Guo, L.; Yan, Y. Disturbance/Uncertainty Estimation and Attenuation Techniques in PMSM Drives—A Survey. IEEE Trans. Ind. Electron. 2017, 64, 3273–3285.
  31. Yuan, Z.; Wang, W.; Chen, J.; Razmjooy, N. Interval linear quadratic regulator and its application for speed control of DC motor in the presence of uncertainties. ISA Trans. 2022, 125, 252–259.
  32. Izadbakhsh, A.; Khorashadizadeh, S. Robust task-space control of robot manipulators using differential equations for uncertainty estimation. Robotica 2017, 35, 1923–1938.
  33. Léchappé, V.; Rouquet, S.; González, A.; Plestan, F.; León, J.D.; Moulay, E.; Glumineau, A. Delay Estimation and Predictive Control of Uncertain Systems With Input Delay: Application to a DC Motor. IEEE Trans. Ind. Electron. 2016, 63, 5849–5857.
  34. Zhou, B.; Yang, L.; Wang, C.; Chen, Y.; Chen, K. Inverse jacobian adaptive tracking control of robot manipulators with kinematic, dynamic, and actuator uncertainties. Complexity 2020, 2020, 5070354.
  35. Liang, X.; Wang, H.; Chen, W. Uncalibrated fixed-camera visual servoing of robot manipulators by considering the motor dynamics. In Proceedings of the 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Hamburg, Germany, 13–15 September 2012; pp. 426–431.
  36. Fateh, M.M.; Khorashadizadeh, S. Robust control of electrically driven robots by adaptive fuzzy estimation of uncertainty. Nonlinear Dyn. 2012, 69, 1465–1477.
  37. Peng, J.; Ding, S.; Dubay, R. Adaptive composite neural network disturbance observer-based dynamic surface control for electrically driven robotic manipulators. Neural Comput. Appl. 2021, 33, 6197–6211.
Figure 1. The visual servoing control system frame of the electrically driven robotic manipulators.
Figure 2. The simple control scheme block diagram for the closed-loop system.
Figure 3. The control scheme frame of a visual servoing electrically driven robotic manipulator. Each closed-loop subsystem is highlighted in a different color. The adaptive parameter update rule serves the joint velocity controller, and the RBFNN mechanism and robust adaptive mechanism are embedded in the velocity loop and torque loop.
Figure 4. The simulation configuration of the robotic manipulator visual system.
Figure 5. Image tracking error $\Delta y$.
Figure 6. The joint angles $q$, joint velocities $\dot{q}$, and actuator torques $\tau$ of the robotic manipulator.
Figure 7. The joint velocity tracking error $\Delta \dot{q}$ and torque tracking error $\Delta \tau$ of the robotic manipulator.
Figure 8. The 2D track of the feature point projection on the image plane.
Figure 9. The 3D track of the feature point.
Figure 10. The adaptive robust parameters $\hat{\Omega}_1$, $\hat{\Omega}_2$.
Figure 11. The joint torque error $\Delta \tau$, joint velocity error $\Delta \dot{q}$, and image error $\Delta y$ of the robotic manipulator under the unknown disturbance.
Table 1. Nomenclature table.

Symbol | Implication
$p$ | The spatial position vector
$q$ | The joint angle vector
$T$ | The homogeneous transformation matrix
$y$ | The image coordinate vector
$\Xi$ | The perspective projection matrix
$\xi_i$ | The $i$th row of the perspective projection matrix
$J$ | The Jacobian matrix
$A$ | The depth-independent interaction matrix
$\Lambda$ | The depth-independent compound matrix
$\lambda_i$ | The depth-independent compound vector
$M$ | The inertia matrix of the robotic manipulator
$C$ | The Coriolis force matrix of the robotic manipulator
$g$ | The gravitational force vector of the robotic manipulator
$\tau$ | The joint torque vector
$L$ | The motor armature inductance matrix
$R$ | The motor resistance matrix
$N$ | The motor electromechanical conversion coefficient matrix
$H$ | The RBFNN-approximated term
$W$ | The output weight matrix of the RBFNN
$\Psi$ | The RBF nodes of the neural network
$d$ | The unknown disturbance term
$\delta$ | The estimation error of the RBFNN
$\omega$ | The sum of the estimation error and the unknown disturbance
$\Omega$ | The upper bound of $|\omega|$
$\theta$ | The compound parameter vector of the camera and manipulator
$V$ | The energy-like Lyapunov function
$z$ | The depth value with respect to the camera frame
Table 2. Parameters of the manipulator.

Joint | Angle (rad) | Offset (m) | Length (m) | Twist (rad) | Mass (kg)
1 | $q_1$ | 0 | 0 | $\pi/2$ | 0
2 | $q_2$ | 0 | 0.4318 | 0 | 17.4
3 | $q_3$ | 0.15005 | 0.0203 | $-\pi/2$ | 4.8