Next Article in Journal
The Hybrid of Multilayer Perceptrons: A New Geostatistical Tool to Generate High-Resolution Climate Maps in Developing Countries
Next Article in Special Issue
Continuous-Time Subspace Identification with Prior Information Using Generalized Orthonormal Basis Functions
Previous Article in Journal
Ternary Hybrid Nanofluid Flow Containing Gyrotactic Microorganisms over Three Different Geometries with Cattaneo–Christov Model
Previous Article in Special Issue
Overview of Identification Methods of Autoregressive Model in Presence of Additive Noise
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Identification of Linear Time-Invariant Systems: A Least Squares of Orthogonal Distances Approach

by
Luis Alberto Cantera-Cantera
1,
Rubén Garrido
2,
Luis Luna
3,*,
Cristóbal Vargas-Jarillo
2 and
Erick Asiain
3
1
Automation and Control Engineering Department, Instituto Politécnico Nacional, Unidad Profesional Adolfo López Mateos, Zacatenco, Av. Luis Enrique Erro s/n, Mexico City 07738, Mexico
2
Automatic Control Department, CINVESTAV-IPN, Av. Instituto Politecnico Nacional 2508, Col. San Pedro Zacatenco, Mexico City 07360, Mexico
3
Centro de Estudios Científicos y Tecnológicos No. 9, Instituto Politécnico Nacional, Mexico City 11400, Mexico
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(5), 1238; https://doi.org/10.3390/math11051238
Submission received: 26 January 2023 / Revised: 1 March 2023 / Accepted: 1 March 2023 / Published: 3 March 2023
(This article belongs to the Special Issue New Trends on Identification of Dynamic Systems)

Abstract

:
This work describes the parameter identification of servo systems using the least squares of orthogonal distances method. The parameter identification problem was reconsidered as data fitting to a plane, which in turn corresponds to a nonlinear minimization problem. Three models of a servo system, having one, two, and three parameters, were experimentally identified using both the classic least squares and the least squares of orthogonal distances. The models with two and three parameters were identified through numerical routines. The servo system model with a single parameter only considered the input gain. In this particular case, the analytical conditions for finding the critical points and for determining the existence of a minimum were presented, and the estimate of the input gain was obtained by solving a simple quadratic equation whose coefficients depended on measured data. The results showed that as opposed to the least squares method, the least squares of orthogonal distances method experimentally produced consistent estimates without regard for the classic persistency-of-excitation condition. Moreover, the parameter estimates of the least squares of orthogonal distances method produced the best tracking performance when they were used to compute a trajectory-tracking controller.

1. Introduction

The aims of a mathematical model are to make forecasts and deductions about the behavior of the system that it is attempting to describe, and then, it must accurately reproduce its measured variables. However, since the mathematical model is developed based on empirical knowledge and physical laws, most of the parameters used to build the model may be unknown. Consequently, it is necessary to determine the values of the parameters to produce the best fit between the signals produced by the model and the measured data. This procedure is called parameter identification or parameter estimation, and it corresponds to a linear estimation if a model is a linear combination of linearly independent functions; otherwise, the estimation is nonlinear in nature [1,2,3,4,5,6].
A widely used approach to solve parameter identification problems and curve fitting is the least squares (LS) method. It was attributed to Gauss, but the first published work on this subject was by Legendre at the beginning of the 19th century [7]. The seminal papers of R. J. Adcock in 1877 [8] and 1878 [9] proposed a variant of the LS method, namely the least squares of the orthogonal distances (LSOD).
In order to appreciate the differences between the LS and the LSOD algorithms, consider Figure 1, where it is desired to fit a set of experimental data to a straight line. The standard LS algorithm minimized the sum of the squares of the vertical distances between the experimental data and the straight line, and consequently, it only considered errors in the y variable. In contrast, the LSOD algorithm minimizes the sum of the squares of the orthogonal distances between the experimental data and the straight line; hence, it accounted for errors in both the x and y variables.
The LSOD method has been mainly used for fitting data to geometric objects, including lines, planes, spheres, ellipses, and hyperbolas [7,10,11,12]. It has also been applied in several problems, including parameter identification of tire models [13] and the modeling of sorption data [14]. It has been reported that LSOD method outperformed the classic LS method when they were used for model-fitting of experimental data [15]. However, parameter identification of dynamical systems is a classic subject, as it has been covered in many references and textbooks, and the LS algorithm has probably been the most widely used method to this end [2,3,4,5,6,16,17,18,19].
A practical case of interest has been the parameter identification in servo systems, which are used in many applications, including motion control, robotics, solar-tracking systems, and prosthetics. However, advanced control algorithms for servo systems require previous knowledge of their parameters. This makes it necessary to use the most suitable method for parameter identification. The parameter identification of servo systems may be solved through different techniques, for example, employing the classical least squares (LS) method [16,20,21,22,23]. Other techniques include Kalman filtering [24], gradient algorithms [25], particle swarm optimization (PSO) [26], total least squares (TLS), and the LSOD method [27].
This work proposes a parameter identification procedure for servo systems using the LSOD method, which recasts the parameter identification problem as data fitting to a plane. Then, the parameter identification could be accomplished by solving a nonlinear minimization problem for which the cost function and the conditions for finding a critical point were described. Then, finding the critical points was equivalent to obtaining the parameter estimates. Three models of the servo systems, having one, two, and three parameters, were considered for parameter identification. The two and three parameter models were solved using numerical routines, whereas for the model with one parameter, which corresponded to the input gain, the parameter identification problem was equivalent to the data fitting to a line, and analytical conditions for obtaining a critical point and to determine if it corresponded to a minimum were derived. Moreover, the estimate of the input gain was obtained by solving a simple quadratic equation whose coefficients depended on measured data. To the best of the authors’ knowledge, this last result had no equivalent when using the LS algorithm.
The key points of this work were the following:
  • We presented a parameter identification scheme using the LSOD method for servo systems.
  • It was easily extended for the parameter identification of n-order LTI systems subjected to constant disturbances, assuming that only input and output data were available from measurements.
  • A cost function depending on the orthogonal distances between the data and a plane was derived, where the latter was built from a linear parametrization.
  • Conditions for obtaining a critical point, i.e., the parameter estimates, as well as the numerical routine to solve the nonlinear minimization problem were described.
  • For the model of a servo system containing only an unknown parameter, the input gain, the simple analytical conditions for computing the critical point and to establish if the point produced a minimum of the cost function were derived. The LSOD method provides the possibility of using smooth excitation signals with low harmonic content, which are more suitable for applications where aggressive excitation signals cannot be used.
  • The proposed parameter identification methodology was experimentally compared with the LS method. Three servo system models were estimated using excitation signals with different frequency spectra, and the parameter estimates obtained with both methods were used to design a trajectory-tracking controller.
The work is structured as follows: Section 2 describes the linear regression for perturbed LTI systems, assuming that only input and output data are available. The LS method is briefly described in Section 3. Section 4 is devoted to the LSOD method. The three models of a servo system are presented in Section 5. The parameter estimates obtained through the LS and LSOD methods using experimental data and the design and testing of three trajectory-tracking controllers are presented, respectively, in Section 6 and Section 7. The main conclusions of this work are described in Section 8.

2. Linear Regression for Perturbed LTI Systems

The general problem of parameter identification was defined as follows. Given the structure of a model and a set of measured data points, the problem consisted of estimating the unknown model parameters, so the output of the model computed using the estimates would match the data in an optimal manner [2].
Let the mathematical model of a linear-time invariant (LTI) system be as follows:
A ( D ) y = B ( D ) u + d ; D i = d i d t i A ( D ) = D n + α 1 D n 1 + + α n ; n 1 B ( D ) = β 0 D m + β 1 D m 1 + + β m n m 0
where it is subjected to a constant disturbance d, with input u, and output y. The aim was to estimate the constant unknown parameters α 1 , , α n , β 0 , β 1 , , β m and the disturbance d. A linear regression for parameter identification purposes was obtained by rewriting (1) as follows:
y ( n ) = α 1 y ( n 1 ) α n y + β 0 u ( m ) + β 1 u ( m 1 ) + + β m u + d ,
= θ Φ ,                                                                                                                                                                                    
where
θ = α 1 , , α n , β 0 , β 1 , , β m , d , Φ = y ( n 1 ) , , y , u ( m ) , u ( m 1 ) , , u , 1 .
Note that the regression vector Φ depends on the system input and output and their time derivatives. In practice, these derivatives are seldom available, thus precluding the use of (2).
If only the input u and the output y are available from measurements, it is current practice to use linear filters to obtain a linear regression for the unknown parameters but with a regression vector, depending only on input–output measurements [17,18]. To this end, we defined the transfer function of the following asymptotically stable filter:
F ( s ) = f n s n + f 1 s n 1 + + f n ,
and the filtered variables u f and y f ,
L { u f } = F ( s ) L { u } ,
L { y f } = F ( s ) L { y } ,
where L { · } stands for the Laplace transform and s is a complex variable.
The expressions (5) and (6) found the next state space representations:
z ˙ 1 = A f z 1 + b f u , u f = c f z 1 ,
z ˙ 2 = A f z 2 + b f y , y f = c f z 2 ,
where
A f = f 1 1 0 0 f n 1 0 0 1 f n 0 0 0 ; b f = 0 0 f n ; c f = 1 0 0 , F ( s ) = c f ( s I A f ) 1 b f ,
with f i > 0 , i = 1 , , n and det ( A f ) = ( 1 ) n + 2 f n . Hence, A f is invertible and Hurwitz-stable.
However, define
z 3 = A ( D ) z 2 B ( D ) z 1 ,
taking the time-derivative of (10) and substituting (7) and (8) produce
z ˙ 3 = A ( D ) z ˙ 2 B ( D ) z ˙ 1 , = A ( D ) ( A f z 2 + b f y ) B ( D ) ( A f z 1 + b f u ) , = A f A ( D ) z 2 B ( D ) z 1 + b f A ( D ) y B ( D ) u ,
which results in
z ˙ 3 = A f z 3 + b f d ,
by substituting Equations (1) and (10). The solution of Equation (12) for a constant disturbance d was shown in [28].
z 3 = e A f t z 3 ( 0 ) + A f 1 ( e A f t I ) b f d .
Substituting Equation (10), multiplying both sides of Equation (13) by c f , substituting u f and y f in Equation (7) and Equation (8), respectively, and, assuming zero initial conditions, finally yielded
A ( D ) y f = B ( D ) u f + d f ,
with d f = c f A f 1 ( e A f t I ) b f d .
The term d f was decomposed as d f = d f t + d f s with d f t = c f A f 1 e A f t b f d and d f s = c f A f 1 b f d . The term d f t exponentially converged to zero because A f was Hurwitz-stable. However, d f s corresponded to the steady-state value of d f , which was computed using Equations (4) and (9), and the final value theorem in [28], i.e.,
d f s = lim s 0 s F ( s ) d s = d , = lim s 0 s c f ( s I A f ) 1 b f d s = c f A f 1 b f d .
The results above showed that c f A f 1 b f = 1 and d f s = d , which allowed us to express Equation (14) as follows:
A ( D ) y f = B ( D ) u f + d + d f t .
A linear regression followed from Equation (15)
y f ( n ) = α 1 y f ( n 1 ) α n y f + β 0 u f ( m ) + β 1 u f ( m 1 ) + + β m u f + d + d f t ,
= θ Φ f + d f t ,                                                                                                                                                                                    
where
θ = α 1 , , α n , β 0 , β 1 , , β m , d , Φ f = y f ( n 1 ) , , y f , u f ( m ) , u f ( m 1 ) , , u f , 1 .
Since the term d f t decayed exponentially to zero, it was not considered in subsequent developments. Equations (3) and (17) have similar structures, since vector Φ f only depends on u f and y f , all entries as well as y f ( n ) are obtained through the next filters due to having the same characteristic polynomial as Equation (4).
L { u f ( i ) } L { u } = f n s i s n + f 1 s n 1 + + f n ; i = 0 , , m ,
L { y f ( i ) } L { y } = f n s i s n + f 1 s n 1 + + f n ; i = 0 , , n .

3. Parameter Identification Using the LS Method

Consider the model (16) with the following unknown parameters: α 1 , , α n , β 0 , β 1 , , β m , d . If the ρ measurements of the variables y f j ( n ) , y f j ( n 1 ) , , y ˙ f j , y f j , u f j ( m ) , u f j ( m 1 ) , , u ˙ f j , u f j with j = 1 , , ρ , are available, then, the equation to estimate the unknown parameters is the follows:
r ¯ = y Ap ,
where
r ¯ = r ¯ 1 r ¯ 2 r ¯ ρ , y = y f 1 ( n ) y f 2 ( n ) y f ρ ( n ) , A = y f 1 ( n 1 ) y f 1 u f 1 ( m ) u f 1 1 y f 2 ( n 1 ) y f 2 u f 2 ( m ) u f 2 1 y f ρ ( n 1 ) y f ρ u f ρ ( m ) u f ρ 1 ,
and
p = α 1 α n β 0 β m d .
In the standard LS method, the goal is to compute p R κ , κ = n + m + 2 , such that
J L S = min p R κ y Ap 2 .
where the operator · stands for the Euclidean norm. The value p minimizing J L S was provided by [19].
p = A y
A = ( A A ) 1 A = V Σ q 1 0 0 0 U
The term A stood for the Moore–Penrose pseudo-inverse of A , which was defined by the Singular value decomposition (SVD), whose properties are described in the next theorem.
Theorem 1
([29]). Let any matrix A R m × n . The SVD of A is denoted by the following:
A = U Σ q 0 0 0 V ,
where U R m × m and V R n × n are orthogonal matrices, and Σ q = diag ( σ 1 , σ 2 , , σ q ) . The numbers σ 1 σ 2 σ q > 0 are called the singular values of A .

4. Parameter Identification Using the LSOD Method

In order to apply the LSOD method, the model (16) has been rewritten, as follows:
x 1 + α 1 x 2 + + α n x k + β 0 x k + 1 + β 1 x k + 2 + + β m x η d = 0 ,
with x 1 = y f ( n ) , x 2 = y f ( n 1 ) , , x k = y f , x k + 1 = u f ( m ) , x k + 2 = u f ( m 1 ) , , x η = u f and unknown parameters α 1 , , α n , β 0 , β 1 , , β m , d , which corresponds to the equation describing a hyperplane.
Note that the LSOD method minimizes the sum of the squares of the orthogonal distances from the experimental data points to an approximation function. Then, the parameter identification of an LTI system, parameterized according to Equation (24), and the application of the LSOD method is equivalent to solve a hyperplane-fitting problem. To illustrate this point, Figure 2 shows the orthogonal and vertical distances used by the LSOD and LS methods, respectively, corresponding to the plane fitting in R 3 .

4.1. The LSOD Method for Plane Fitting

Let the plane π η R η be defined as follows:
π η : a 1 x 1 + a 2 x 2 + a 3 x 3 + + a η x η + a 0 = 0 ,
with the normal vector N = a 1 a 2 a 3 a η , and let the experimental data vector Q i = x 1 i , x 2 i , , x η i with i = 1 , 2 , , ρ . Then, the orthogonal distance d i between Q i and a point on the plane π η could be defined as follows:
Definition 1.
The orthogonal distance d i from the point Q i to the plane π η is the distance from Q i to the intersection point P i between the plane π η and the straight line L i , which is defined along N and passes through Q i .
In the above definition, P i is a vector on the plane π η . Therefore, the square of the orthogonal distance between Q i and P i was the following:
d i 2 = Q i P i 2
and the straight line L i was defined as the following:
L i = Q i + ζ N : ζ R .
The intersection between π η and L i was expressed as follows:
π η L i = ( x 1 i + a 1 ζ , , x η i + a η ζ ) a 1 ( x 1 i + a 1 ζ ) + + a η ( x η i + a η ζ ) + a 0 = 0 ,
and the intersection point was defined by P i = ( x 1 i + a 1 ζ , x 2 i + a 2 ζ , , x η i + a η ζ ) with
ζ = a 1 x 1 i + a 2 x 2 i + + a η x η i + a 0 a 1 2 + a 2 2 + a 3 2 + + a η 2 .
Therefore, substituting Q i and P i into Equation (26) yielded the following:
d i 2 = ( a 1 x 1 i + a 2 x 2 i + + a η x η i + a 0 ) 2 a 1 2 + a 2 2 + a 3 2 + + a η 2 = r i 2 a 1 2 + a 2 2 + a 3 2 + + a η 2 ,
where r i = a 1 x 1 i + a 2 x 2 i + + a η x η i + a 0 .
Now, consider the vector of orthogonal distances d = d 1 d 2 d ρ . Then, the cost function to be minimized using the LSOD method would be the following:
J L S O D = d 2 = 1 a 1 2 + a 2 2 + a 3 2 + + a η 2 i = 1 ρ r i 2 .
Therefore, to estimate the parameters a 0 , a 1 , , a η of the plane (25) π η R η using the LSOD method, the function (28) must be minimized. Hence, setting the first derivatives of (28) to zero, with respect to a 0 , a 1 , , a η , yielded:
J L S O D a 1 = 2 a 1 2 + a 2 2 + a 3 2 + + a η 2 i = 1 ρ r i x 1 i a 1 a 1 2 + a 2 2 + a 3 2 + + a η 2 i = 1 ρ r i 2 = 0 J L S O D a η = 2 a 1 2 + a 2 2 + a 3 2 + + a η 2 i = 1 ρ r i x η i a η a 1 2 + a 2 2 + a 3 2 + + a η 2 i = 1 ρ r i 2 = 0 J L S O D a 0 = 2 a 1 2 + a 2 2 + a 3 2 + + a η 2 i = 1 ρ r i = 0 .
Since the expression (29) become to a set of nonlinear algebraic equations, they must be solved for a 0 , a 1 , , a η in order to find a critical point and, then, to obtain a minimum for the cost function (28). From the above, it was clear that the parameter identification problem solved through the LSOD method turns to a nonlinear minimization problem. Regarding this issue, there are several methods to solve this problem, including the Newton method [2,29,30]. The MATLAB routine fminunc is based on the Newton method [31], which was used in this work to solve the set (29).
Finally, the cost function to be minimized in order to estimate the unknown parameters of an LTI system, parameterized according to (24) by the LSOD method, was provided by the following:
J L S O D = d 2 = 1 1 + α 1 2 + + α n 2 + β 0 2 + β 1 2 + + β m 2 i = 1 ρ r i 2 ,
with r i = x 1 i + α 1 x 2 i + + α n x k i + β 0 x k + 1 , i + β 1 x k + 2 , i + + β m x η i d .

4.2. The LSOD Method for a Straight Line

A particular case of the LTI system model (1) was the following:
y ¨ = β 0 u + d .
Its corresponding filtered version was the following:
y ¨ f = β 0 u f + d .
The goal was to estimate β 0 using the LSOD method according to y ¨ f and u f measurements. To obtain a suitable analytical solution, instead of resorting to numerical methods, the parameter estimation developed in the next paragraphs only considered the estimation of β 0 and did not consider the disturbance d. Therefore, the problem was reduced to the data fitting to a straight line, passing through the origin, as defined by the following:
= ( x 1 , x 2 ) R 2 x 1 + β 0 x 2 = 0 ,
with x 1 = y ¨ f and x 2 = u f . Hence, applying the LSOD method to estimate the unknown parameters of system (31) yielded the cost function:
J L S O D = d 2 = 1 1 + β 0 2 i = 1 ρ r i 2 ,
r i = x 1 i + β 0 x 2 i ,
which was a special case of (30), and the estimation of the input gain β 0 was also a nonlinear problem. Nevertheless, its simplicity allowed the first and second derivatives of (34) to be explicitly found, with respect to β 0 .
The first derivative of Equation (34), with respect to β 0 , was given by the following:
J L S O D ( β 0 ) : = J L S O D β 0 = 2 1 + β 0 2 i = 1 ρ r i x 2 i + β 0 1 + β 0 2 i = 1 ρ r i 2 .
Equating (36) to zero and solving for β 0 led to a second-degree equation:
μ 1 β 0 2 + μ 2 β 0 + μ 3 = 0 ,
with
μ 1 = i = 1 ρ x 1 i x 2 i = x 1 x 2 , μ 2 = i = 1 ρ x 1 i 2 i = 1 ρ x 2 i 2 = x 1 2 x 2 2 , μ 3 = i = 1 ρ x 1 i x 2 i = μ 1 ,
where x 1 = [ x 11 , , x 1 ρ ] and x 2 = [ x 21 , , x 2 ρ ] are the measurement vectors.
The solutions of Equation (37), which corresponds to the critical points, were provided by the following:
β 0 1 , 2 = μ 2 ± μ 2 2 4 μ 1 μ 3 2 μ 1 , = μ 2 2 μ 1 ± μ 2 2 4 μ 1 2 + 1 , = x 1 2 x 2 2 2 ( x 1 x 2 ) ± ( x 1 2 x 2 2 ) 2 4 ( x 1 x 2 ) 2 + 1 ,
with the condition
x 1 x 2 0 .
Then, the solutions β 0 1 , 2 are always real, and they exist as long as the measurement vectors x 1 and x 2 are not orthogonal. To verify that one of these solutions corresponds to a minimum in (34), the second derivative test was employed.
Definition 2.
Suppose J L S O D is continuous near the critical point β 0
  • If J L S O D ( β 0 ) = 0 and J L S O D ( β 0 ) > 0 , then J L S O D has a local minimum at β 0 .
  • If J L S O D ( β 0 ) = 0 and J L S O D ( β 0 ) < 0 , then J L S O D has a local maximum at β 0 .
The second derivative of (34), with respect to β 0 , was expressed, as follows:
J L S O D ( β 0 ) : = 2 J L S O D β 0 2 = 2 1 + β 0 2 i = 1 ρ x 2 i 2 4 β 0 1 + β 0 2 i = 1 ρ r i x 2 i 1 3 β 0 2 ( 1 + β 0 2 ) 2 i = 1 ρ r i 2 .
Equating the right-hand side of (36) to zero produced the following:
i = 1 ρ r i x 2 i = β 0 1 + β 0 2 i = 1 ρ r i 2 .
Substituting (41) into (40) and using (34) produced:
J L S O D = 2 1 + β 0 2 i = 1 ρ x 2 i 2 1 1 + β 0 2 i = 1 ρ r i 2 = 2 1 + β 0 2 x 2 2 J L S O D .
Therefore, a critical point β 0 of (34) corresponds to a minimum if
x 2 2 J L S O D ( β 0 ) > 0
where J L S O D ( β 0 ) denotes the cost function evaluated at β 0 .

5. Servo System Mathematical Models and Parameter Identification Procedure

Servo systems are widely used in CNC machines, industrial processes, assembly robots, electric elevators, electric and hybrid vehicles, trains, and many other areas. Although there has not been a general mathematical model for servo systems, most share similar structures, and certain parameters, such as the input gain due to the power amplifier driving the motor, have more influence in the servo system performance. Moreover, if the requirements of its performance increase, so does the complexity of the mathematical model, which then requires more elaborate control algorithms.
Several key parameters had to be considered when designing servo control systems, including friction phenomena, internal and external disturbances, load changes, un-modeled dynamics, and measurement noise [32,33]. In this work, three servo system models were presented for parameter identification purposes. The first model considered the servo system input gain, a friction term, and a constant disturbance, as the parameters to be estimated. The second model was built by ignoring the disturbance in the first model, and the third model only considered the estimation of the input gain. Moreover, the relevance of the parameter estimates was also assessed by designing feedback control laws for trajectory tracking.
Thus, having several models allows a practitioner to choose one of them for a specific task. For example, some control algorithms, including active disturbance rejection control (ADRC) [34,35], only need the input gain for their implementation, and then, it is only necessary to identify this term, and the model considering only the input gain would be more than adequate for this task.

5.1. First Mathematical Model

Figure 3 depicts a block diagram of a servo system model, which consists of a direct current (DC) motor driven by a power amplifier, which works in current mode though a proportional current controller. The variable y corresponds to the servo angular position. Signal u is the input control signal, K A is the power amplifier gain, K p is the proportional gain of the current controller, the constant K c l is the current loop feedback gain, and the constant K H B is the gain of the H-Bridge in the power amplifier. The variable ν is the motor input voltage, the constants R A , L A , K τ , and K E M F are respectively, the armature resistance, the armature inductance, the torque constant, and the back electromotive force constant. The parameters J and B are the lumped DC motor and load inertia and viscous friction coefficient respectively. The term τ d corresponds to constant disturbances that include parasitic constant voltages produced inside the power amplifier. The task of the current loop is to keep the armature current i a proportional to the control signal u, and to speed up the electrical dynamics of the servo system. Due to this feature, the electrical dynamics could have been overlooked at low frequencies.
Thus, by ignoring the electrical dynamics of the servo system and assuming that the external disturbances were constant, we obtained the following model:
y ¨ + α 1 y ˙ = β 0 u + d
where
α 1 = B J + K τ K E M F J ( R a + K p K H B K c l ) , β 0 = K A K p K H B K τ J ( R a + K p K H B K c l ) , d = R a τ d J ( R a + K p K H B K c l ) .
Then, α 1 is related to the damping due to the mechanical friction and the back electromotive force, β 0 is the input gain, and d is a function of the constant disturbance.
According to (17), the corresponding filtered version of (44) was the following:
y ¨ f + α 1 y ˙ f = β 0 u f + d
and the exponential decaying term d f t , due to the filtering, was neglected.
The approximation plane for (45) was obtained by defining the change of variables: ( x 1 , x 2 , x 3 ) = ( y ¨ f , u f , y ˙ f )
π 3 = ( x 1 , x 2 , x 3 ) R 3 x 1 + β 0 x 2 + α 1 x 3 d = 0 ,
with normal vector N = 1 β 0 α 1 . Applying the LSOD method to the plane (46), the cost function to be minimized to estimate β 0 , α 1 and d was determined by the following:
J L S O D = d 2 = 1 1 + β 0 2 + α 1 2 i = 1 ρ r i 2 ,
The residuals corresponded to r i = x 1 i + β 0 x 2 i + α 1 x 3 i d .
The LS method, as described in Section 3, was applied to fit the plane (46), considering ρ measurements, which resulted in the following:
r ¯ = r ¯ 1 r ¯ ρ R ρ × 1 , y = x 11 x 1 ρ R ρ × 1 , A 3 = x 31 x 21 1 x 3 ρ x 2 ρ 1 R ρ × 3 , p 3 = α 1 β 0 d R 3 × 1 .

5.2. Second Mathematical Model

The second mathematical model to be identified was obtained by neglecting the disturbance term d in (44), i.e.,
y ¨ + α 1 y ˙ = β 0 u ,
and the corresponding filtered version was provided by
y ¨ f + α 1 y ˙ f = β 0 u f .
The approximation plane with the change of variable ( x 1 , x 2 , x 3 ) = ( y ¨ f , u f , y ˙ f ) to be identified was the following:
π 2 = ( x 1 , x 2 , x 3 ) R 3 x 1 + β 0 x 2 + α 1 x 3 = 0 ,
with the normal vector N = 1 β 0 α 1 .
The LSOD method applied to the plane (51) to estimate β 0 and α 1 resulted in the cost function:
J L S O D = d 2 = 1 1 + β 0 2 + α 1 2 i = 1 ρ r i 2 ,
and the residuals were r i = x 1 i + β 0 x 2 i + α 1 x 3 i .
Now, considering ρ measurements, we applied the LS method to (51) yielded the following:
r ¯ = r ¯ 1 r ¯ ρ R ρ × 1 , y = x 11 x 1 ρ R ρ × 1 , A 2 = x 31 x 21 x 3 ρ x 2 ρ R ρ × 2 , p 2 = α 1 β 0 R 2 × 1 .

5.3. Third Mathematical Model

Finally, if only the input gain of the servo system needs to be estimated, the α 1 y ˙ and d terms are neglected in (44), thus yielding:
y ¨ = β 0 u ,
The filtered version of the above model is:
y ¨ f = β 0 u f .
Since the mathematical model (55) was a double-integrator system and considering the change of variable ( x 1 , x 2 ) = ( y ¨ f , u f ) , the straight line to be identified was the following:
= ( x 1 , x 2 ) R 2 x 1 + β 0 x 2 = 0
and the function to be minimized to estimate β 0 of the model (56) by the LSOD method was (34).
The LS method applied to fit the line (56) resulted in the following terms, considering ρ measurements:
r ¯ = r ¯ 1 r ¯ ρ R ρ × 1 , y = x 11 x 1 ρ R ρ × 1 , A 1 = x 21 x 2 ρ R ρ × 1 , p 1 = β 0 R .
As was previously mentioned, parameter identification using the LSOD method corresponds to a nonlinear minimization problem. Then, the minimization of the cost functions (47) and (52) was carried out by the fminunc MATLAB routine, whereas the cost function (34) was minimized using the LSOD analytical method, as described in Section 4.2. However, the parameter identification of the planes (46) and (51), and the line (56), was also carried out by the LS method through (22), together with (48), (53), and (57).

6. Servo System Parameter Identification Using Experimental Measurements

The experimental environment used for data acquisition and real-time experiments can be see in Appendix A. Since the models (44), (49), and (54) had similar information about the behavior of the servo system, the aim of the experiments was twofold: first, to estimate the parameters of these models, and second, to evaluate the identified models by designing and evaluating a trajectory-tracking controller.
However, it is well known that for parameter identification using the LS algorithm, a persistency-of-excitation (PE) condition has to be met in order to ensure the invertibility of A A in (23) [4,5,6]. To fulfill this condition, a test signal with a wide frequency spectrum is typically used. Examples of these signals include filtered white noise and pseudo-random binary signals. For the LSOD algorithm, and to the best of the authors’ knowledge, there is no previous work on how to choose an excitation test signal.
This work considered two test signals, namely filtered white noise and a sine wave. This choice allowed us to study the behavior of the LS and LSOD algorithms on signals with radically different spectra.
In order to perform parameter identification, a proportional derivative (PD) feedback control law was proposed to stabilize the servo system:
u = K p e K d y ˙ e
where e denotes the position error defined as e = y r y , y r is the reference, the position y is obtained by an optical encoder, and y ˙ e is an estimate obtained from the motor position measurements through the next filter
L y ˙ e L y = 300 s s + 300 300 s + 300 ,
where L { · } corresponds to the Laplace operator. The PD controller gains were adjusted empirically until obtaining an acceptable response from position, and then the proportional and derivative gains were set to K p = 2 and K d = 0.09 , respectively.
The measurements of the servo system were processed using the filters (18) and (19) to obtain estimates of the angular velocity and acceleration, respectively, which were not available from measurements (see Figure 4). The poles of the filters must be real repeated value to avoid overshoots and they must have fast dynamics. For this study, the filters were defined as follows:
y j 400 s s 2 + 40 s + 400 y ˙ j f y j 400 s 2 s 2 + 40 s + 400 y ¨ j f u j 400 s 2 + 40 s + 400 u j f
In order to apply the LS and the LSOD methods, the values of u j f , y ˙ j f , y ¨ j f , j = 1 , , ρ were acquired at the time instants t 1 , , t m (see Figure 4). For the LS method, this allowed us to build A 1 , A 2 , A 3 and y in (48), (53), and (57).

Experimental Identification Results

Table 1, Table 2 and Table 3 show the results of the parameter identification of the models (45), (50) and (55), where β 0 , α 1 , and d are the estimates of β 0 , α 1 and d, respectively. The white noise power and the sample time in the Simulink band-limited white noise block were 0.01 and 0.01 , respectively. The signal generator block producing a sine-wave signal was used with an amplitude of 0.5 s and a frequency of 0.25 Hertz. A zero-order hold (ZOH), with a sample time of 0.05 , was employed to obtain the samples and the experiments in each case were 10 s in duration, which provided ρ = 200 samples.
The parameter estimates produced by the LS and LSOD methods using the filtered white-noise signal were quite similar for the thee models. A well-known aspect of the PE condition in gradient and LS algorithms for adaptive control and parameter identification was that it guaranteed estimates that were close to those obtained in the absence of disturbances [17,18]. Therefore, the results obtained by the LS method for the three models were not surprising.
However, in the case of the sine-wave signal, the results varied. It is worth noting that the estimate β 0 produced by the LSOD algorithm remained close to a value of 40, regardless of the excitation signal and the identified model, which contrasted with the results obtained using the LS method. In addition, as shown in Table 1 and Table 2, there was also an increase in the estimation of parameter α 1 when the sine wave and the LSOD method were used in comparison with the white-noise results.
However, the LS method produced very poor estimates of β 0 for the three models when the sine-wave signal was employed as excitation. A possible explanation of these outcomes was that a sine wave has two spectral lines, which is theoretically enough to fulfill the persistent excitation (PE) condition in order to identify a single parameter [17]. However, the presence of disturbances and un-modeled terms introduced a significant bias in the estimate. However, the LS algorithm also produced poor estimates for the models with two and three parameters. In this regard, the LS method is an ill-conditioned problem [29], and it requires an input signal that satisfies the PE condition in order to obtain consistent estimates [4].
In addition, for the estimation of β 0 in model (54) using the LSOD method, the solution was the result of the quadratic Equation (37), and only the orthogonality condition (39) had to be met to obtain an estimate, see Table 4. This condition did not appear to be related to the frequency content of the excitation signal, thus indicating the possibility of using the LSOD method with smooth excitation signals in applications where more aggressive signals with high-frequency content cannot be employed for parameter identification purposes.

7. Real-Time Position Control and Servo System Model Validation

The estimated models (46), (51), and (56) were evaluated by using their parameter estimates for designing trajectory tracking feedback control laws. To evaluate the closed-loop performance of the integral of the squared errors ( I S E ), the integral of the absolute value of the error ( I A E ), the integral of the absolute value of the control ( I A C ), and the integral of the absolute value of the control variation ( I A C V ) indexes were employed. The I A C V evaluates the chattering level and high-frequency content of the control signal. These performance indexes [36] are defined as follows:
I S E = τ 1 τ 2 e 2 ( t ) d t , I A E = τ 1 τ 2 e ( t ) d t , I A C V = τ 1 τ 2 d u ( t ) d t d t , I A C = τ 1 τ 2 u ( t ) d t .
The time window for evaluating the above indexes was defined by τ 1 = 0 s and τ 2 = 20 s.
The time-varying reference y r and its time derivatives used in all the experiments were the following:
y r = y 1 y ˙ r = y ˙ 1 y ¨ r = a ω π y ˙ 2
which corresponds to the solution of the Duffing oscillator equation
y ˙ 1 = a ω π y 2 , y ˙ 2 = a ω π 0.25 y 2 + y 1 1.05 y 1 3 + 0.3 sin ( ω π t ) ,
with a = 0.5 , ω = 0.55 . The gains in all the cases described in the next subsections were tuned as K p = 225 and K d = 21 .

7.1. Servo System Model with Three Parameter Estimates

Consider the servo system model (44), y ¨ = α 1 y ˙ + β 0 u 3 + d , and its parameter estimates depicted in Table 1. The control law used for the trajectory-tracking task was the following:
u 3 = 1 β 0 y ¨ r + K p ( y r y ) + K d y ˙ r y ˙ e + α 1 y ˙ e d ,
Table 5 continues the performance indexes of the experiments using the control law (60). The closed-loop responses, the position errors, and the control signals are depicted in Figure 5, Figure 6 and Figure 7, respectively. The experimental results in Table 5 showed that the smallest values of the I S E and I A E performance indexes were obtained with the parameters produced by the LSOD method.
We noted that the control law (60), computed with the parameter estimates from the LS method under sine-wave excitation, produced the highest value of the I A C V index. This was due to the poor estimate of the input gain β 0 , which was the lowest in Table 1. Moreover, the values of the I A C index were similar for the two identification methods.

7.2. Servo System Model with Two Parameter Estimates

The servo system model (49), y ¨ = α 1 y ˙ + β 0 u 2 and the parameters in Table 2 were considered in this study. The control law used for the trajectory-tracking task was provided by the following:
u 2 = 1 β 0 y ¨ r + K p ( y r y ) + K d y ˙ r y ˙ e + α 1 y ˙ e
The closed-loop responses, the position errors, and the control signals are shown in Figure 8, Figure 9 and Figure 10, respectively. The experimental results from Table 6 indicated that the control law, computed using the parameter estimates, generated by the filtered white-noise signal and provided by the LSOD method produced the lowest values of the I S E and I A E indexes. However, when the parameters obtained with the sine-wave signal were employed in the control law, the I S E index, related to the LS algorithm, was the lowest but at the expense of an increased value of the I A C V index, i.e., with a control signal having significant high-frequency components. Table 2 shows that the value β 0 was too small, and then its inverse was large, thus producing a larger control signal. Further, the LSOD method produced acceptable performance with lower values of control signal activity, according to the I A C V index. Finally, we noted that all the index values related to the LSOD method did not exhibit significant differences for the two excitation signals.

7.3. Servo System Model with One Estimated Parameter

The next control law is designed for the servo system model (54), y ¨ = β 0 u 1 , with the parameter values displayed in Table 3
u 1 = 1 β 0 y ¨ r + K p ( y r y ) + K d y ˙ r y ˙ e
Figure 11, Figure 12 and Figure 13 depict the closed-loop responses, the position errors, and control signals, respectively.
According to Table 7, the performance indexes corresponding to the white-noise signal were similar for both the LS and LSOD methods. We noted that the I S E and I A E indexes, obtained with parameter estimates using the LS method with a sine-wave signal, were the smallest, but the values of the I A C V and I A C indexes were the highest. For model (49), a very low value of the estimate β 0 had a larger inverse, thus producing a large control activity. As a consequence, the control displayed large oscillations, as it could be verified in Figure 13. These signals could damage the servo system and/or the load it drives. We noted the LSOD method produced essentially the same performance, regardless of the excitation signal used for obtaining the parameter estimates.

7.4. Analysis of the Results Produced by the LSOD Method

Here, we analyzed the parameter estimates and the closed-loop performance obtained with the LSOD method, regarding the servo system models and the excitation signal.
Table 8 continues all the parameter estimates produced by the LSOD method. One interesting outcome was that the value of the input gain estimate β 0 remained in the range 38–43 for all the models and excitation signals. Furthermore, a consequence of the low variability of this estimate was that the closed-loop performance was similar for the same control law regardless of the excitation signal, as it could be verified in Table 9 and Table 10. These tables also showed that designing a control law using a model with more parameters improved performance. For example, the values of the I S E and I A E indexes considerably decreased using control law u 3 with three parameters, as compared to the control law u 1 , which only considered the input gain. These results were not surprising because more information was used for designing control laws u 2 and u 3 . However, the I A C V and I A C indexes had similar values in all the cases.
We should also emphasize that for the same control law, the choice of the excitation signal for parameter estimation appeared to have no influence on the closed-loop performance. A practical consequence of this was that the LSOD method could be used in systems where applying excitation signals with high-frequency content could be impractical or dangerous. Examples include quad-rotors, mobile robots, and servo systems driving robot manipulators, where the robot joints are driven though gearboxes.

8. Conclusions

In the previous sections, the least squares of orthogonal distances (LSOD) method was evaluated for the parameter identification of servo systems. The main conclusions of this study included the following:
  • The LSOD method could be a practical alternative to the classic least squares for parameter identification of linear time-invariant systems. The parameter identification problem was transformed into data fitting to a hyperplane, which was subsequently solved as a nonlinear minimization problem.
  • The LSOD method required well-established numerical routines based on the Newton method.
  • For the parameter identification of the servo model having only one parameter to identify, i.e., the input gain, the parameter estimate was computed analytically by solving a simple quadratic equation.
  • Three models of a servo system with one, two and three parameters were estimated using a sine wave and a filtered white noise signals. Tracking control laws were designed and experimentally tested for the three identified models. The overall best tracking performance was obtained when the tracking control laws were computed using the parameter estimates produced by the LSOD method irrespective of the signal used for exciting the servo system and the number of identified parameters.
The LSOD estimates showed consistency, regardless of the excitation signal spectra, which suggested the possibility of estimating the input gain in second-order electro-mechanical systems using this method, especially when it might be impractical or even dangerous to use aggressive test signals for parameter identification purposes. Future research could include applying the LSOD method for parameter identification in nonlinear systems and determining the fewest terms required in the mathematical models, using the methodology of Burton et al. [37] and the LSOD method.

Author Contributions

Conceptualization, C.V.-J. and R.G.; methodology, L.A.C.-C. and L.L.; software, L.A.C.-C. and L.L.; validation, C.V.-J., R.G. and E.A.; formal analysis, L.A.C.-C. and L.L.; investigation, L.A.C.-C., L.L. and E.A.; writing—original draft preparation, R.G., L.A.C.-C. and L.L.; writing—review and editing, E.A., L.A.C.-C. and L.L.; supervision, C.V.-J. and R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Secretaría de Investigación y Posgrado SIP IPN under grant number 20227019.

Data Availability Statement

Not applicable.

Acknowledgments

Luis Luna and Erick Asiain want to thank the support of Martha Hernandez, Edgar Ramirez and Natisma Lopez from CECyT 9-IPN. The work of Luis Cantera, Luis Luna, Erick Asiain was supported by scholarships from CONACyT-MEXICO.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Servo System Experimental Setup

The servo system setup used in the experiments is shown in Figure A1. It consisted of the following components:
  • A Quanser QPID data acquisition card.
  • A DC tachogenerator plus an optical encoder from Servotek model 1024PTSA-7388F-1, which provided velocity and position measurements.
  • Two mechanically coupled brushed DC motors from Clifton Precision model JDTH-2250-BQ-IC. One was used for parameter identification, while the other was used as a load.
  • A pair of amplifiers, model 30A20AC PWM, from Advanced Motion Controls.
  • The control algorithms were implemented using the MATLAB/Simulink programming platform under the Quanser QUARC real-time control environment and using a sampling period of 1 ms and the Euler-ode1 integration method.
Figure A1. Experimental Setup.
Figure A1. Experimental Setup.
Mathematics 11 01238 g0a1

References

  1. Bard, Y. Nonlinear Parameter Estimation; Academic Press: New York, NY, USA, 1974. [Google Scholar]
  2. Englezos, P.; Kalogerakis, N. Applied Parameter Estimation for Chemical Engineers; CRC Press: Boca Raton, FL, USA, 2000. [Google Scholar]
  3. Sorenson, H.W. Parameter Estimation: Principles and Problems; M. Dekker: New York, NY, USA, 1980; Volume 9. [Google Scholar]
  4. Åström, K.J.; Eykhoff, P. System identification—A survey. Automatica 1971, 7, 123–162. [Google Scholar] [CrossRef] [Green Version]
  5. Ljung, L. System Identification: Theory for User; Prentice Hall: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  6. Isermann, R.; Münchhof, M. Identification of Dynamic Systems: An Introduction with Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  7. Petras, I.; Podlubny, I. Least Squares or Least Circles? A comparison of classical regression and orthogonal regression. Chance 2010, 23, 38–42. [Google Scholar] [CrossRef]
  8. Adcock, R.J. Note on the method of least squares. Analyst 1877, 4, 183–184. [Google Scholar] [CrossRef]
  9. Adcock, R.J. A problem in least squares. Analyst 1878, 5, 53–54. [Google Scholar] [CrossRef]
  10. Ahn, S.J. Least Squares Orthogonal Distance Fitting of Curves and Surfaces in Space; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004; Volume 3151. [Google Scholar]
  11. Ahn, S.J.; Rauh, W.; Warnecke, H.J. Least-squares orthogonal distances fitting of circle, sphere, ellipse, hyperbola, and parabola. Pattern Recognit. 2001, 34, 2283–2303. [Google Scholar] [CrossRef]
  12. Penner, A. Fitting Splines to a Parametric Function; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  13. López, A.; Olazagoitia, J.L.; Marzal, F.; Rubio, M.R. Optimal parameter estimation in semi-empirical tire models. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2019, 233, 73–87. [Google Scholar] [CrossRef]
  14. Poch, J.; Villaescusa, I. Orthogonal distance regression: A good alternative to least squares for modeling sorption data. J. Chem. Eng. Data 2012, 57, 490–499. [Google Scholar] [CrossRef]
  15. Keles, T. Comparison of Classical Least Squares and Orthogonal Regression in measurement error models. Int. Online J. Educ. Sci. 2018, 10, 200–214. [Google Scholar]
  16. Åström, K.J. Lectures on the Identification Problem: The Least Squares Method; Technical Report; Division of Automatic Control, Lund Institute of Technology: Lund, Sweden, 1968. [Google Scholar]
  17. Sastry, S.; Bodson, M. Adaptive Control: Stability, Convergence, and Robustness; Prentice-Hall, Inc.: Hoboken, NJ, USA, 1989. [Google Scholar]
  18. Ioannou, P.A.; Sun, J. Robust Adaptive Control; Prentice-Hall, Inc.: Hoboken, NJ, USA, 1995. [Google Scholar]
  19. Keesman, K.J. System Identification: An Introduction; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  20. Garrido, R.; Miranda, R. DC servomechanism parameter identification: A closed loop input error approach. ISA Trans. 2012, 51, 42–49. [Google Scholar] [CrossRef] [PubMed]
  21. Abdulle, A.; Wanner, G. 200 years of least squares method. Elem. Der Math. 2002, 57, 45–60. [Google Scholar] [CrossRef] [Green Version]
  22. Bjorck, A. Numerical Methods for Least Squares Problems; SIAM: Philadelphia, PA, USA, 1996; Volume 51. [Google Scholar]
  23. Lawson, C.L.; Hanson, R.J. Solving Least Squares Problems; SIAM: Philadelphia, PA, USA, 1995. [Google Scholar]
  24. Terzic, B.; Jadric, M. Design and implementation of the extended Kalman filter for the speed and rotor position estimation of brushless DC motor. IEEE Trans. Ind. Electron. 2001, 48, 1065–1073. [Google Scholar] [CrossRef]
  25. Andreescu, G.D.; Rabinovici, R. Torque-speed adaptive observer and inertia identification without current transducers for control of electric drives. Proc. Rom. Acad. Ser. A 2003, 4, 8. [Google Scholar]
  26. Cortez-Vega, R.; Maldonado, J.; Garrido, R. Parameter Identification using PSO under measurement noise conditions. In Proceedings of the 2019 6th International Conference on Control, Decision and Information Technologies (CoDIT), Paris, France, 23–26 April 2019; pp. 103–108. [Google Scholar]
  27. Cantera, L.A.C.; Luna, L.; Vargas-Jarillo, C.; Garrido, R. Parameter estimation of a linear ultrasonic motor using the least squares of orthogonal distances algorithm. In Proceedings of the 2019 16th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 9–11 November 2019; pp. 1–6. [Google Scholar]
  28. Ogata, K. Modern Control Engineering; Prentice & Hall Inc.: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  29. Atkinson, K.E. An Introduction to Numerical Analysis; John Wiley & Sons: New York, NY, USA, 2008. [Google Scholar]
  30. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; SIAM: Hoboken, NJ, USA, 1996. [Google Scholar]
  31. MathWorks. Optimization Toolbox User’s Guide; MathWorks: Natick, MA, USA, 2020. [Google Scholar]
  32. Firoozian, R. Servo Motors and Industrial Control Theory; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  33. Nakamura, M.; Goto, S.; Kyura, N. Mechatronic Servo System Control: Problems in Industries and Their Theoretical Solutions; Springer: Berlin/Heidelberg, Germany, 2004; Volume 300. [Google Scholar]
  34. Han, J. From PID to active disturbance rejection control. IEEE Trans. Ind. Electron. 2009, 56, 900–906. [Google Scholar] [CrossRef]
  35. Sariyildiz, E.; Ohnishi, K. Stability and robustness of disturbance-observer-based motion control systems. IEEE Trans. Ind. Electron. 2014, 62, 414–422. [Google Scholar] [CrossRef] [Green Version]
  36. Dorf, R.C.; Bishop, R.H. Modern Control Systems; Pearson: New York, NY, USA, 1998. [Google Scholar]
  37. Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. USA 2016, 113, 3932–3937. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Straight line fitting problem. (a) Classic least squares algorithm.; (b) Least squares of orthogonal distances algorithm.
Figure 1. Straight line fitting problem. (a) Classic least squares algorithm.; (b) Least squares of orthogonal distances algorithm.
Mathematics 11 01238 g001
Figure 2. Approximation plane.
Figure 2. Approximation plane.
Mathematics 11 01238 g002
Figure 3. Block diagram of a servo system model.
Figure 3. Block diagram of a servo system model.
Mathematics 11 01238 g003
Figure 4. Servo system signal filtering block diagram.
Figure 4. Servo system signal filtering block diagram.
Mathematics 11 01238 g004
Figure 5. Reference and servo system angular positions produced by control law (60) with parameter estimates β 0 , α 1 and d .
Figure 5. Reference and servo system angular positions produced by control law (60) with parameter estimates β 0 , α 1 and d .
Mathematics 11 01238 g005
Figure 6. Position errors produced by control law (60) with parameter estimates β 0 , α 1 and d .
Figure 6. Position errors produced by control law (60) with parameter estimates β 0 , α 1 and d .
Mathematics 11 01238 g006
Figure 7. Control signals produced by control law (60) with parameter estimates β 0 , α 1 and d .
Figure 7. Control signals produced by control law (60) with parameter estimates β 0 , α 1 and d .
Mathematics 11 01238 g007
Figure 8. Reference and servo system angular positions produced by control law (61) with parameter estimates β 0 and α 1 .
Figure 8. Reference and servo system angular positions produced by control law (61) with parameter estimates β 0 and α 1 .
Mathematics 11 01238 g008
Figure 9. Position errors produced by control law (61) with parameter estimates β 0 and α 1 .
Figure 9. Position errors produced by control law (61) with parameter estimates β 0 and α 1 .
Mathematics 11 01238 g009
Figure 10. Control signals produced by control law (61) with parameter estimates β 0 and α 1 .
Figure 10. Control signals produced by control law (61) with parameter estimates β 0 and α 1 .
Mathematics 11 01238 g010
Figure 11. Reference and servo system angular positions produced by control law (62) with the parameter estimate β 0 .
Figure 11. Reference and servo system angular positions produced by control law (62) with the parameter estimate β 0 .
Mathematics 11 01238 g011
Figure 12. Position errors produced by control law (62) with the parameter estimate β 0 .
Figure 12. Position errors produced by control law (62) with the parameter estimate β 0 .
Mathematics 11 01238 g012
Figure 13. Control signals produced by control law (62) with the parameter estimate β 0 .
Figure 13. Control signals produced by control law (62) with the parameter estimate β 0 .
Mathematics 11 01238 g013
Table 1. Parameter identification of the servo system model (44).
Table 1. Parameter identification of the servo system model (44).
Model ParameterWhite-Noise SignalSine-Wave Signal
LSLSODLSLSOD
β 0 40.80966341.40223015.30098138.682501
α 1 2.1425942.1740902.2690665.868168
d −1.653451−1.678005−0.841450−2.008895
Table 2. Parameter identification of the servo system model (49).
Table 2. Parameter identification of the servo system model (49).
Model ParameterWhite-Noise SignalSine-Wave Signal
LSLSODLSLSOD
β 0 39.92489141.3808297.52002242.460353
α 1 2.1117032.1897211.1230416.704563
Table 3. Parameter identification of the servo system model (54).
Table 3. Parameter identification of the servo system model (54).
Model ParameterWhite-Noise SignalSine Wave Signal
LSLSODLSLSOD
β 0 37.67989141.4616383.48477539.823089
Table 4. Orthogonality condition x 1 x 2 (39) and the value of Minimum condition J L S O D ( β 0 ) (42) for the parameter identification of the servo system model (54) using the LSOD method.
Table 4. Orthogonality condition x 1 x 2 (39) and the value of Minimum condition J L S O D ( β 0 ) (42) for the parameter identification of the servo system model (54) using the LSOD method.
ConditionWhite-Noise SignalSine-Wave Signal
x 1 x 2 617.3497308.607933
J L S O D ( β 0 ) 0.0173120.000272
Table 5. Performance indexes produced by control law (60) with parameter estimates β 0 , α 1 and d .
Table 5. Performance indexes produced by control law (60) with parameter estimates β 0 , α 1 and d .
IndexWhite-Noise SignalSine-Wave Signal
LSLSODLSLSOD
I S E 18.621516.489813.818111.9049
I A E 16.030515.140713.646312.4864
I A C V 9.74149.928523.143610.0730
I A C 1.72901.69771.78121.7345
Table 6. Performance indexes produced by control law (61) with parameter estimates β 0 and α 1 .
Table 6. Performance indexes produced by control law (61) with parameter estimates β 0 and α 1 .
IndexWhite-Noise SignalSine-Wave Signal
LSLSODLSLSOD
I S E 34.794929.290016.099724.0802
I A E 21.879920.053814.3486518.57136
I A C V 9.948699.846645.64389.1073
I A C 1.72311.69631.80441.6745
Table 7. Performance indexes produced by control law (62) with the parameter estimates β 0 .
Table 7. Performance indexes produced by control law (62) with the parameter estimates β 0 .
IndexWhite-Noise SignalSine-Wave Signal
LSLSODLSLSOD
I S E 45.128145.395216.463647.1772
I A E 25.192225.579213.975525.8390
I A C V 10.652210.3371200.360310.2424
I A C 1.73991.70532.66601.7106
Table 8. Parameter estimates obtained by the LSOD method.
Table 8. Parameter estimates obtained by the LSOD method.
Parameter
Estimates
White-Noise SignalSine-Wave Signal
Model 3Model 2Model 1Model 3Model 2Model 1
β 0 41.46163841.38082941.40223039.82308942.46035338.682501
α 1 2.1897212.1740906.7045635.868168
d −1.678005−2.008895
Table 9. Performance indexes using the LSOD method and white noise as an excitation signal using the control laws (60)–(62).
Table 9. Performance indexes using the LSOD method and white noise as an excitation signal using the control laws (60)–(62).
Control LawParameter Estimates ISE IAE IACV IAC
u 1 β 0 45.395225.579210.33711.7053
u 2 β 0 , α 1 29.2920.05389.84661.6963
u 3 β 0 , α 1 , d 16.489815.14079.92851.6977
Table 10. Performance indexes using the LSOD method and a sine wave as excitation signal using the control laws (60)–(62).
Table 10. Performance indexes using the LSOD method and a sine wave as excitation signal using the control laws (60)–(62).
Control LawParameter
Estimates
ISE IAE IACV IAC
u 1 β 0 47.177225.839010.24241.7106
u 2 β 0 , α 1 24.080218.571369.10731.6745
u 3 β 0 , α 1 , d 11.904912.486410.07301.7345
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cantera-Cantera, L.A.; Garrido, R.; Luna, L.; Vargas-Jarillo, C.; Asiain, E. Identification of Linear Time-Invariant Systems: A Least Squares of Orthogonal Distances Approach. Mathematics 2023, 11, 1238. https://doi.org/10.3390/math11051238

AMA Style

Cantera-Cantera LA, Garrido R, Luna L, Vargas-Jarillo C, Asiain E. Identification of Linear Time-Invariant Systems: A Least Squares of Orthogonal Distances Approach. Mathematics. 2023; 11(5):1238. https://doi.org/10.3390/math11051238

Chicago/Turabian Style

Cantera-Cantera, Luis Alberto, Rubén Garrido, Luis Luna, Cristóbal Vargas-Jarillo, and Erick Asiain. 2023. "Identification of Linear Time-Invariant Systems: A Least Squares of Orthogonal Distances Approach" Mathematics 11, no. 5: 1238. https://doi.org/10.3390/math11051238

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop