Article

On Matrix Linear Diophantine Equation-Based Digital-Adaptive Block Pole Placement Control for Multivariable Large-Scale Linear Process

by Belkacem Bekhiti 1, Kamel Hariche 2, Abdellah Kouzou 3, Jihad A. Younis 4 and Abdel-Nasser Sharkawy 5,6,*
1 Institute of Aeronautics and Space Studies (IASS), University of Blida, 1 BP 270, Blida 09000, Algeria
2 Institute of Electrical and Electronic Engineering (IGEE), University of Boumerdes, Boumerdes 35000, Algeria
3 Laboratory of Applied Automation and Industrial Diagnostics (LAADI), Faculty of Science and Technology, Ziane Achour University of Djelfa, Djelfa 17000, Algeria
4 Department of Mathematics, Aden University, Aden P.O. Box 6014, Yemen
5 Mechanical Engineering Department, Faculty of Engineering, South Valley University, Qena 83523, Egypt
6 Mechanical Engineering Department, College of Engineering, Fahad Bin Sultan University, Tabuk 47721, Saudi Arabia
* Author to whom correspondence should be addressed.
AppliedMath 2025, 5(4), 139; https://doi.org/10.3390/appliedmath5040139
Submission received: 19 August 2025 / Revised: 23 September 2025 / Accepted: 24 September 2025 / Published: 7 October 2025

Abstract

This paper introduces a digital adaptive control framework for large-scale multivariable systems, integrating matrix linear Diophantine equations with block pole placement. The main innovation lies in adaptively relocating the full eigenstructure using matrix polynomial representations and a recursive identification algorithm for real-time parameter estimation. The proposed method achieves accurate eigenvalue placement, strong disturbance rejection, and fast regulation under model uncertainty. Its effectiveness is demonstrated through simulations on a large-scale winding process, showing precise tracking, low steady-state error, and robust decoupling. Compared with traditional non-adaptive designs, the approach ensures superior performance against parameter variations and noise, highlighting its potential for high-performance industrial applications.

1. Introduction

The design of adaptive multivariable controllers has progressed substantially with advances in model identification, pole placement theory, and computational control methods. Early studies [1,2,3] introduced instrumental and prediction-error techniques for identifying linear matrix fraction description (LMFD) models, providing a structured foundation for multivariable system representation. Building on these principles [4], later works extended polynomial eigenvalue formulations to MIMO systems [5,6,7], offering deeper insights into eigenstructure assignment. Foundational research on matrix polynomials and interpolation [8,9,10,11] was further developed in the context of industrial multivariable process identification [12,13,14]. These contributions paved the way for adaptive control schemes such as [15], which integrate recursive estimation and robust design strategies [16,17,18]. Classical control theory [19,20,21] emphasized the role of controllability and observability in pole assignment, while mathematical formulations [22,23,24,25] provided Diophantine and polynomial equation solutions essential for digital control and their application to block pole placement in winding processes [26,27]. More advanced matrix polynomial approaches [28,29,30,31] enabled robust block structure relocation and eigenvalue assignment, supported by system-theoretic formulations and modeling tools [32,33,34,35,36] that facilitated practical MIMO compensator synthesis. Classical non-adaptive control methods, though well established, degrade under parameter variations, model uncertainties, and external disturbances. Their reliance on fixed compensator structures and their focus on eigenvalue assignment alone limit flexibility in shaping the full eigenstructure and reduce robustness in multivariable systems.
To address these issues, the proposed approach employs a matrix linear Diophantine framework with adaptive block pole placement, enabling online parameter adjustment and accurate eigenstructure assignment under uncertainty. Recent works [37,38,39,40,41] have explored neural network control, optimization, and AI-assisted pole placement for nonlinear and large-scale systems. In this context, the present study introduces an adaptive digital block pole placement strategy tailored to large-scale MIMO winding processes, ensuring precise closed-loop performance under real-time conditions.
This paper proposes a novel design methodology for MIMO adaptive compensators, offering greater flexibility in assigning system eigenstructure through block poles. Unlike conventional approaches, this method enables the placement of more than just the original set of desired eigenvalues, thereby enhancing the robustness and performance of digital MIMO control systems. To the best of the authors’ knowledge, previous research has not explored adaptive block pole placement in digital systems represented by matrix fractions for eigenstructure assignment via dynamic compensators. This paper is structured as follows. Section 2 reviews existing MIMO system identification algorithms. Section 3 introduces the proposed adaptive block pole placement technique using digital compensator design. In Section 4, an application to the discrete adaptive control of a winding process is presented to demonstrate the effectiveness of the method. Finally, Section 5 provides concluding remarks and potential future research directions.

2. Theoretical Preliminaries for MIMO Polynomial Systems

2.1. Operator-Theoretic Foundations

This section provides the mathematical foundation for the proposed method, presenting operator-theoretic concepts, rational matrix-valued functions, spectral projectors, and other related algebraic topics. These tools are essential for analyzing matrix polynomial systems and designing compensators by allowing eigenstructure assignment. The eigenspace decomposition and projector-based techniques define the spectral structure of the closed-loop system. High-order difference systems (polynomial operators) often involve large-dimensional matrices, which has led to a renewed interest in rational matrix polynomial and state-space representations [42]. Polynomial systems theory in discrete time relies on the properties of polynomial matrices over the shift operator, using polynomial operator algebra (a natural extension of classical operational calculus based on the z-transform). To clarify these concepts, consider the finite-dimensional Hilbert spaces $U$ and $Y$ representing the input and output spaces, respectively. Let the forward shift operator $\sigma$ be defined by $\sigma x_k = x_{k+1}$, understood as a bounded linear shift operator on a sequence space $\mathcal{D}(\sigma)\subseteq L^2(\mathbb{Z},U)$. Two operator-valued matrix polynomials are defined: $D_L(\sigma)=\sum_{i=0}^{\ell} D_{Li}\,\sigma^{\ell-i} \in \mathcal{B}(Y)$ and $N_L(\sigma)=\sum_{i=0}^{\ell} N_{Li}\,\sigma^{\ell-i} \in \mathcal{B}(U,Y)$, where $D_{Li}\in\mathbb{R}^{p\times p}\subset\mathcal{B}(Y)$, $N_{Li}\in\mathbb{R}^{p\times m}\subset\mathcal{B}(U,Y)$, and $\mathcal{B}(\cdot)$ denotes the space of bounded linear operators between the corresponding spaces. The discrete-time polynomial system can be formalized as $D_L(\sigma)y_k = N_L(\sigma)u_k$, with $u_k\in\mathcal{D}(N_L(\sigma))$ and $y_k\in\mathcal{D}(D_L(\sigma))$, both contained in appropriate sequence spaces (e.g., $L^2(\mathbb{Z},U)$, $L^2(\mathbb{Z},Y)$). By applying the z-transform and assuming that $D_L(z)$ is left-invertible in the ring of rational operator-valued functions, we obtain the input–output map $y(z)=H(z)\,u(z)$ with $H(z)=D_L^{-1}(z)N_L(z)$, where $H(z)\in\mathcal{B}(U,Y)$ is a rational matrix-valued function defined for $z\in\Upsilon\subseteq\mathbb{C}$ in a suitable domain of the complex plane.
Alternatively, one may define a right matrix fraction description of $H(z)$ as $H(z)=N_R(z)D_R^{-1}(z)$, where $D_R(z)=\sum_{i=0}^{\ell}D_{Ri}\,z^{\ell-i}\in\mathcal{B}(U)$ and $N_R(z)=\sum_{i=0}^{\ell}N_{Ri}\,z^{\ell-i}\in\mathcal{B}(U,Y)$. Such a rational operator-valued function $H(z)$ admits a state-space realization under the discrete-time realization theory of infinite-dimensional linear systems [22]. Here is a realization theorem for rational operator-valued functions:
Theorem 1.
Let $U$, $Y$, and $X$ be finite-dimensional Hilbert spaces (e.g., $\mathbb{R}^m$, $\mathbb{R}^p$, $\mathbb{R}^n$) and let the operator $H:\Upsilon\subseteq\mathbb{C}\to\mathcal{B}(U,Y)$ be a proper rational operator-valued function, analytic on a connected open set $\Upsilon\subseteq\mathbb{C}$ containing the resolvent set of some operator. Then, there exists a quadruple of bounded linear operators $(A,B,C,D)\in\mathcal{B}(X)\times\mathcal{B}(U,X)\times\mathcal{B}(X,Y)\times\mathcal{B}(U,Y)$, with $A:X\to X$, $B:U\to X$, $C:X\to Y$, and $D\in\mathcal{B}(U,Y)$, such that for all $z\in\rho(A)$ (the resolvent set of $A$), $H(z)=C(zI-A)^{-1}B+D$, where $(zI-A)^{-1}$ is the discrete-time resolvent operator of $A$. Moreover, the realization is minimal if the following conditions are met:
  • $\operatorname{span}\{A^kBu : u\in U,\ k\ge 0\}$ is dense in $X$ (controllability);
  • $\operatorname{span}\{CA^kx : x\in X,\ k\ge 0\}$ is dense in $Y$ (observability).
Specifically, there exist two canonical minimal realizations, $(A_c,B_c,C_c,D_c)$ and $(A_o,B_o,C_o,D_o)$, called the controllability and observability realizations (respectively), with
$$H(z)=N_R(z)D_R^{-1}(z)=C_c(zI-A_c)^{-1}B_c+D_c\qquad\text{or}\qquad H(z)=D_L^{-1}(z)N_L(z)=C_o(zI-A_o)^{-1}B_o+D_o$$
with $D_R(z)=\sum_{i=0}^{\ell}D_{Ri}\,z^{\ell-i}$; $N_R(z)=\sum_{i=0}^{\ell}N_{Ri}\,z^{\ell-i}$; $D_L(z)=\sum_{i=0}^{\ell}D_{Li}\,z^{\ell-i}$; $N_L(z)=\sum_{i=0}^{\ell}N_{Li}\,z^{\ell-i}$, where $D_{L0}=I_p$ and $D_{R0}=I_m$ (monic denominators).
Proof. 
We define a hierarchical sequence of abstract state variables $\{x_i(z)\}_{i=1}^{\ell}\subset H$, where $H=L^2(\mathbb{Z},\mathbb{R}^m)$ is the Hilbert space of square-summable sequences, and each belongs to the image of the resolvents of $z$, by recursive relations derived from the inverse powers of $z$. That is, $x_1(z)=z^{-1}\big(N_{L\ell}\,u(z)-D_{L\ell}\,y(z)\big),\ \ldots,\ x_\ell(z)=z^{-1}\big(N_{L1}\,u(z)-D_{L1}\,y(z)+x_{\ell-1}(z)\big)$. After recursive substitution and rearrangement, the linear time-invariant (LTI) dynamics can be cast into a first-order operator difference system:
$$z\begin{bmatrix}x_1(z)\\ x_2(z)\\ \vdots\\ x_\ell(z)\end{bmatrix}=\begin{bmatrix}\mathbb{O}&\cdots&\mathbb{O}&-D_{L\ell}\\ I_p&&&-D_{L\ell-1}\\ &\ddots&&\vdots\\ \mathbb{O}&&I_p&-D_{L1}\end{bmatrix}\begin{bmatrix}x_1(z)\\ x_2(z)\\ \vdots\\ x_\ell(z)\end{bmatrix}+\begin{bmatrix}N_{L\ell}-D_{L\ell}N_{L0}\\ \vdots\\ N_{L1}-D_{L1}N_{L0}\end{bmatrix}u(z);\qquad y(z)=\begin{bmatrix}\mathbb{O}&\cdots&\mathbb{O}&I_p\end{bmatrix}\begin{bmatrix}x_1(z)\\ \vdots\\ x_\ell(z)\end{bmatrix}+N_{L0}\,u(z)$$
where $\mathbb{O}$ is the $p\times p$ null matrix, $I_p$ is the $p\times p$ identity matrix, and $x_i(t)\in\mathbb{R}^{p}$. More compactly, we can write $z\,x_o(z)=A_ox_o(z)+B_ou(z)$; $y(z)=C_ox_o(z)+D_ou(z)$, where $D_o=N_{L0}$ is the direct feedthrough matrix, $x_o(t)\in\mathbb{R}^{\ell p}$ collects the auxiliary observable state variables, $A_o\in\mathbb{R}^{\ell p\times\ell p}$ is an observable block companion matrix constructed from the $D_{Li}$, $B_o\in\mathbb{R}^{\ell p\times m}$ encodes the algebraic dependencies between the input and $x_o(t)$, and $C_o\in\mathbb{R}^{p\times\ell p}$ encodes the algebraic dependencies between the output and $x_o(t)$. Finally, we have $H(z)=D_L^{-1}(z)N_L(z)=C_o(zI-A_o)^{-1}B_o+D_o\in\mathbb{R}(z)^{p\times m}$.
Similarly, the system admits a right MFD description y z = N R z D R 1 z u z , which can be expanded as
$$y(z)=\Big[N_{R0}+\Big(\textstyle\sum_{i=1}^{\ell}\big(N_{Ri}-N_{R0}D_{Ri}\big)z^{\ell-i}\Big)\Big(\textstyle\sum_{i=0}^{\ell}D_{Ri}z^{\ell-i}\Big)^{-1}\Big]u(z)=N_{R0}\,u(z)+y_{new}(z)$$
which means that
$$\Big(\textstyle\sum_{i=1}^{\ell}\big(N_{Ri}-N_{R0}D_{Ri}\big)z^{\ell-i}\Big)^{-1}y_{new}(z)=\Big(\textstyle\sum_{i=0}^{\ell}D_{Ri}z^{\ell-i}\Big)^{-1}u(z)=Q(z)$$
This last equation gives $z^{\ell}Q(z)=u(z)-\big(D_{R1}z^{\ell-1}Q(z)+D_{R2}z^{\ell-2}Q(z)+\cdots+D_{R\ell}Q(z)\big)$ and $y_{new}(z)=\big(N_{R1}-N_{R0}D_{R1}\big)z^{\ell-1}Q(z)+\cdots+\big(N_{R\ell}-N_{R0}D_{R\ell}\big)Q(z)$. Now, we define a sequence of state variables $x_k(z)=z^{k-1}Q(z)$, where $k=1,\ldots,\ell$, and by recursive formulation we obtain $z\,x_1(z)=x_2(z);\ z\,x_2(z)=x_3(z);\ \ldots;\ z\,x_\ell(z)=z^{\ell}Q(z)$, and the output is given by $y(z)=N_{R0}\,u(z)+\sum_{i=1}^{\ell}\big(N_{Ri}-N_{R0}D_{Ri}\big)x_{\ell+1-i}(z)$. It is very easy to check that $z\,x_\ell(z)=z^{\ell}Q(z)=u(z)-\sum_{i=1}^{\ell}D_{Ri}\,x_{\ell+1-i}(z)$; so, in matrix form, we can write
$$z\begin{bmatrix}x_1(z)\\ \vdots\\ x_\ell(z)\end{bmatrix}=\begin{bmatrix}\mathbb{O}&I_m&&\\ &\ddots&\ddots&\\ &&\mathbb{O}&I_m\\ -D_{R\ell}&\cdots&&-D_{R1}\end{bmatrix}\begin{bmatrix}x_1(z)\\ \vdots\\ x_\ell(z)\end{bmatrix}+\begin{bmatrix}\mathbb{O}\\ \vdots\\ \mathbb{O}\\ I_m\end{bmatrix}u(z);\qquad y(z)=\begin{bmatrix}N_{R\ell}-N_{R0}D_{R\ell}&\cdots&N_{R1}-N_{R0}D_{R1}\end{bmatrix}\begin{bmatrix}x_1(z)\\ \vdots\\ x_\ell(z)\end{bmatrix}+N_{R0}\,u(z)$$
Or, more compactly, $z\,x_c(z)=A_cx_c(z)+B_cu(z)$; $y(z)=C_cx_c(z)+D_cu(z)$, where $x_c(t)\in\mathbb{R}^{\ell m}$ collects the auxiliary controllable state variables, $A_c\in\mathbb{R}^{\ell m\times\ell m}$ is a controllable block companion matrix constructed from the $D_{Ri}$, $B_c\in\mathbb{R}^{\ell m\times m}$ encodes the algebraic dependencies between the input and $x_c(t)$, $C_c\in\mathbb{R}^{p\times\ell m}$ encodes the algebraic dependencies between the output and $x_c(t)$, and $D_c=N_{R0}$ is the direct feedthrough matrix. Thus, the transfer function $H(z)$ is obtained as $H(z)=N_R(z)D_R^{-1}(z)=C_c(zI-A_c)^{-1}B_c+D_c$.
It is understood that the state-space representation is not unique, whereas the transfer matrix is unique. This means that infinitely many state-space representations belong to the same system, since any operator has infinitely many equivalent forms related by isomorphism. That is, if we have a state-space representation $z\,x_1(z)=A_1x_1(z)+B_1u(z)$; $y(z)=C_1x_1(z)+D_1u(z)$ and $x_1=Tx_2$, then, in the new basis, we have $z\,x_2(z)=A_2x_2(z)+B_2u(z)$; $y(z)=C_2x_2(z)+D_2u(z)$, with $A_2=T^{-1}A_1T$; $B_2=T^{-1}B_1$; $C_2=C_1T$; and $D_2=D_1$. In terms of the transfer matrix, we have
$$H(z)=C_2(zI-A_2)^{-1}B_2+D_2=C_1T(zI-T^{-1}A_1T)^{-1}T^{-1}B_1+D_1=C_1(zI-A_1)^{-1}B_1+D_1$$
Hence, every finite-dimensional discrete-time LTI system with a polynomial difference representation admits both left and right matrix fraction descriptions in the ring $\mathbb{R}(z)^{p\times m}$, and can be realized canonically in state-space form (Q.E.D). □
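As a purely illustrative numerical sketch (not part of the paper's development), the right-MFD branch of this construction can be checked with NumPy: below, the coefficient matrices of a monic right MFD with $m=\ell=2$ are randomly generated, the controller-form realization is assembled, and both descriptions are evaluated at an arbitrary complex point.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2  # number of inputs = outputs, polynomial degree ell = 2

# Monic right MFD: D_R(z) = I z^2 + D1 z + D2,  N_R(z) = N0 z^2 + N1 z + N2
D1, D2 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
N0, N1, N2 = (rng.standard_normal((m, m)) for _ in range(3))

Z, I = np.zeros((m, m)), np.eye(m)
# Block controller-form realization (right-MFD branch of the theorem)
Ac = np.block([[Z, I], [-D2, -D1]])
Bc = np.vstack([Z, I])
Cc = np.hstack([N2 - N0 @ D2, N1 - N0 @ D1])
Dc = N0

z0 = 0.7 + 0.3j  # arbitrary evaluation point away from the poles
H_mfd = (N0 * z0**2 + N1 * z0 + N2) @ np.linalg.inv(I * z0**2 + D1 * z0 + D2)
H_ss = Cc @ np.linalg.inv(z0 * np.eye(2 * m) - Ac) @ Bc + Dc
print(np.allclose(H_mfd, H_ss))  # True: both descriptions agree
```

The same check works at any point outside the spectrum of $A_c$, which is a direct numerical confirmation of $H(z)=N_R(z)D_R^{-1}(z)=C_c(zI-A_c)^{-1}B_c+D_c$.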
Note. 
The ring $\mathbb{R}[z]$ of real-coefficient polynomials in the shift variable $z$ can be naturally embedded in the ring of rational matrix functions $\mathbb{R}(z)$, and the structure of $H(z)$ reflects its fractional ideal form in this ring, supporting a dual interpretation via left and right coprime factorizations [12,15,22].
Definition 1.
Let $\Sigma=(A,B,C,D)$ be a finite-dimensional LTI system defined over Hilbert spaces $X=\mathbb{R}^n$, $U=\mathbb{R}^m$, and $Y=\mathbb{R}^p$, governed by $z\,x(z)=Ax(z)+Bu(z)$ and $y(z)=Cx(z)+Du(z)$, where $A\in\mathcal{B}(X)$; $B\in\mathcal{B}(U,X)$; $C\in\mathcal{B}(X,Y)$; and $D\in\mathcal{B}(U,Y)$. The system $\Sigma$ is said to be exactly controllable on the finite time horizon $[k_0,k_1]$ if for every pair of states $x_0,x_1\in X$, there exists a sequence of inputs $u\in L^2([k_0,k_1],U)$ such that the solution $x_k\in L^2([k_0,k_1],X)$ of the system satisfies $x(k_0)=x_0$, $x(k_1)=x_1$.
Definition 2.
The system $\Sigma$ is said to be exactly observable on the finite time horizon $[k_0,k_1]$ if for any $x_0\in X$, the knowledge of the output trajectory $y_k\in Y$ for $k\in[k_0,k_1]$, corresponding to zero input $u_k=0$, allows for the unique determination of the initial condition $x(k_0)=x_0$.
To formally characterize exact controllability and exact observability in discrete-time finite-dimensional LTI systems, the following theorem provides the necessary and sufficient algebraic and analytic conditions in terms of rank tests on the controllability and observability matrices, and spectral properties of the shift operator realization.
Theorem 2
(Kalman’s Rank Condition [21]). The controllability Gramian of the linear system is given by the finite sum $W_c(0,N)=\sum_{i=0}^{N-1}A^iBB^\top(A^\top)^i$ and the controllability matrix is given by $\Omega_c=[B\ \ AB\ \ \cdots\ \ A^{n-1}B]\in\mathbb{R}^{n\times nm}$. The necessary and sufficient condition for the linear time-invariant system to be completely state-controllable is given by one of the following conditions:
(i) 
$W_c(0,N)$ is nonsingular, or equivalently $W_c(0,N)$ is a positive definite (PD) matrix.
(ii) 
$\Omega_c$ is full rank, i.e., $\operatorname{rank}(\Omega_c)=n$. Equivalently, $W_c(0,N)=\Omega_c\Omega_c^\top$ is nonsingular.
The observability Gramian of the linear system is given by $W_o(0,N)=\sum_{i=0}^{N-1}(A^\top)^iC^\top CA^i$ and the observability matrix is $\Omega_o=[C^\top\ \ (CA)^\top\ \ \cdots\ \ (CA^{n-1})^\top]^\top\in\mathbb{R}^{np\times n}$. The necessary and sufficient condition for the LTI system to be completely state-observable is given by one of the following conditions:
(i) 
$W_o(0,N)$ is nonsingular, or equivalently $W_o(0,N)$ is a positive definite (PD) matrix.
(ii) 
$\Omega_o$ is full rank, i.e., $\operatorname{rank}(\Omega_o)=n$. Equivalently, $W_o(0,N)=\Omega_o^\top\Omega_o$ is nonsingular.
Proof. 
See [21,22].
Without loss of generality, setting $k_0=0$ and $x_N=0$, the discrete-time system is completely controllable if for every initial state $x_0\in X$, there exists a sequence of inputs $u\in L^2([0,N],U)$ such that the corresponding trajectory satisfies $x_N=0$. A necessary and sufficient condition for complete controllability is the surjectivity of the controllability map $\mathcal{C}_{[0,N]}:L^2([0,N],U)\to X$, $u\mapsto\sum_{i=0}^{N-1}A^iBu_{N-1-i}$. In algebraic terms, this is equivalent to Kalman's rank condition: $\operatorname{rank}[B\ \ AB\ \ \cdots\ \ A^{n-1}B]=n$ [8].
Similarly, the system is observable if, and only if, the observability operator $\mathcal{O}_{[k_0,N]}:X\to L^2([k_0,N],Y)$, $x_0\mapsto\{CA^kx_0\}_{k=0}^{N-1}$, is injective, i.e., $\mathcal{O}_{[k_0,N]}x_0=0\Rightarrow x_0=0$. That is, the state can be recovered uniquely from the output measurement over the time interval. This is equivalent to $\operatorname{rank}[C^\top\ \ (CA)^\top\ \ \cdots\ \ (CA^{n-1})^\top]^\top=n$ [12,13]. □
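The rank conditions above can be exercised with a short NumPy sketch (an illustration with a small hand-picked example, not taken from the paper):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# A single-input chain: the input drives both states through the dynamics
A = np.array([[0.0, 1.0], [-0.2, 0.9]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

print(np.linalg.matrix_rank(ctrb(A, B)))  # 2 = n, so completely controllable
print(np.linalg.matrix_rank(obsv(A, C)))  # 2 = n, so completely observable
```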
Theorem 3
(Popov–Belevitch–Hautus Test). Given an LTI system $x_{k+1}=Ax_k+Bu_k$ and $y_k=Cx_k+Du_k$, the two tests for controllability are as follows:
PBH Rank Test: $(A,B)$ is controllable if $\operatorname{rank}[zI-A,\ B]=n$ for every $z\in\sigma(A)$.
PBH Eigenvector Test: $(A,B)$ is controllable if there exists no left eigenvector of $A$ orthogonal to the columns of $B$; that is, $\{w^\top A=zw^\top$ and $w^\top B=0\}$ only if $w=0$.
The two Popov–Belevitch–Hautus tests for observability are as follows:
PBH Rank Test: $(A,C)$ is observable if $\operatorname{rank}\begin{bmatrix}zI-A\\ C\end{bmatrix}=n$ for every $z\in\sigma(A)$.
PBH Eigenvector Test: $(A,C)$ is observable if there exists no right eigenvector of $A$ orthogonal to the rows of $C$; that is, $\{Av=zv$ and $Cv=0\}$ only if $v=0$.
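A minimal numerical sketch of the PBH rank test (illustrative only; the diagonal example is chosen so that one mode is visibly unreachable):

```python
import numpy as np

def pbh_controllable(A, B, tol=1e-9):
    """PBH rank test: rank[zI - A, B] = n at every eigenvalue z of A."""
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack([z * np.eye(n) - A, B]), tol) == n
        for z in np.linalg.eigvals(A)
    )

# Mode z = 2 is unreachable with B_bad: the input has no authority over it
A = np.diag([2.0, 0.5])
B_bad = np.array([[0.0], [1.0]])
B_good = np.array([[1.0], [1.0]])

print(pbh_controllable(A, B_bad))   # False
print(pbh_controllable(A, B_good))  # True
```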
Note. 
In the discrete-time setting, the controllability, observability, and cross-Gramians are also characterized as solutions to discrete Lyapunov or Sylvester equations. The controllability Gramian $W_c$ satisfies $W_c=AW_cA^\top+BB^\top$. The observability Gramian $W_o$ satisfies the dual Lyapunov equation $W_o=A^\top W_oA+C^\top C$ [17,18,19].
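These Lyapunov characterizations can be cross-checked numerically; the sketch below (an illustration with an arbitrary Schur-stable example, using SciPy's discrete Lyapunov solver) compares the solver's Gramian against the truncated series definition:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.3]])   # Schur-stable: |eigenvalues| < 1
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Infinite-horizon Gramians: Wc = A Wc A' + B B',  Wo = A' Wo A + C' C
Wc = solve_discrete_lyapunov(A, B @ B.T)
Wo = solve_discrete_lyapunov(A.T, C.T @ C)

# Cross-check Wc against the truncated series sum_{i<N} A^i B B' (A')^i
N = 200
Wc_sum = sum(np.linalg.matrix_power(A, i) @ B @ B.T @ np.linalg.matrix_power(A.T, i)
             for i in range(N))
print(np.allclose(Wc, Wc_sum))                      # True for stable A, large N
print(np.allclose(Wo, A.T @ Wo @ A + C.T @ C))      # Wo solves the dual equation
```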

2.2. Eigenspace Decomposition and Projectors

The eigenspace decomposition of a matrix $A$ along its spectral projectors leads to powerful insights, particularly in control theory, model reduction, and frequency-domain analysis. Let $A\in\mathbb{C}^{n\times n}$ be a diagonalizable matrix. Then there exists a complete set of spectral projectors $\{E_i\}_{i=1}^{s}$ corresponding to its distinct eigenvalues $\{\lambda_i\}_{i=1}^{s}$, such that $A=\sum_{i=1}^{s}\lambda_iE_i$ with $E_iE_j=\delta_{ij}E_i$, $\sum_{i=1}^{s}E_i=I$, and each projector $E_i$ satisfies $E_i^2=E_i$ and $E_iA=AE_i=\lambda_iE_i$. This gives the spectral decomposition of $A$ in terms of its eigen-subspaces: $\mathbb{C}^n=\bigoplus_{i=1}^{s}\operatorname{Im}(E_i)$. If $A$ is not diagonalizable, we generalize the decomposition using Dunford's decomposition: $A=A_d+N$, where $A_d=\sum_{i=1}^{s}\lambda_iE_i$, $N=\sum_{i=1}^{s}N_i$, and $A_dN-NA_d=0$, with $N_i$ nilpotent operators acting on the generalized eigenspaces [43,44]. Now, let us explore how spectral projectors are constructed from left and right eigenvectors of $A$. Assume that $A\in\mathbb{C}^{n\times n}$ and suppose $\lambda_i\in\mathbb{C}$ is a simple eigenvalue of $A$. Then, $Av_i=\lambda_iv_i$ and $w_i^\top A=\lambda_iw_i^\top$, where $v_i,w_i$ are right/left eigenvectors of $A$. We can normalize the eigenvectors such that $w_i^\top v_i=1$ and, therefore, $A\sum_{i=1}^{n}v_iw_i^\top=\sum_{i=1}^{n}\lambda_iv_iw_i^\top$; but $\sum_{i=1}^{n}v_iw_i^\top=I$; hence, $E_i=v_iw_i^\top$ and $A=\sum_{i=1}^{n}\lambda_iE_i$. Moreover, $A[v_1\ \cdots\ v_n]=[v_1\ \cdots\ v_n]\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$ or, equivalently, $A=V\Lambda W$ with $W=V^{-1}$. However, if $A$ is diagonalizable and eigenvalues are repeated, then there exists a set $\{v_i^{(j)}\}$ that is a basis of right eigenvectors associated with $\lambda_j$, and $\{w_i^{(j)}\}$ is the dual set of left eigenvectors; in such a case, $E_j=\sum_{i=1}^{m_j}v_i^{(j)}w_i^{(j)\top}$, with $A=\sum_{k=1}^{s}\lambda_kE_k$. In light of this, consider the multivariable LTI system described by the transfer matrix, defined as
$$H(z)=N(z)D(z)^{-1}=C(zI-A)^{-1}B+D=CV(zI-\Lambda)^{-1}WB+D$$
where $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $C\in\mathbb{R}^{p\times n}$, $D\in\mathbb{R}^{p\times m}$; $v_i$ is a right eigenvector of $A$; $w_i$ is a left eigenvector of $A$; $\lambda_i$ are the eigenvalues of $A$; $V=[v_1\ v_2\ \cdots\ v_n]$; $W=V^{-1}=[w_1\ w_2\ \cdots\ w_n]^\top$; and $\Lambda=\operatorname{diag}(\lambda_1,\ldots,\lambda_n)$.
The transfer function can be expressed in terms of its eigenvalues (assuming that the matrix $D$ may be non-zero) [21,22,23]:
$$H(z)=\sum_{i=1}^{n}\frac{Cv_iw_i^\top B}{z-\lambda_i}+D=\frac{H_1}{z-\lambda_1}+\cdots+\frac{H_n}{z-\lambda_n}+D$$
with $H_i=Cv_iw_i^\top B=\big[(z-\lambda_i)\big(H(z)-D\big)\big]_{z=\lambda_i}$. If $A\in\mathbb{R}^{n\times n}$ is diagonalizable, then it can be rewritten as:
$$A=V\Lambda W=\sum_{i=1}^{n}\lambda_iv_iw_i^\top\qquad\text{and}\qquad H_i=CE_iB$$
The partial fraction expansion of $H(z)$ shows that each mode $\lambda_i$ contributes a rank-one (or low-rank) residue $H_i=CE_iB$ to the total transfer behavior. Each term $H_i/(z-\lambda_i)$ represents the modal response associated with eigenvalue $\lambda_i$. The rank and norm of $H_i$ quantify the contribution of mode $\lambda_i$ to the input–output behavior [14,15]. For a causal system with state update $x_{k+1}=Ax_k+Bu_k$ and output $y_k=Cx_k+Du_k$, the impulse response $h_k$ ($k\ge0$) is $h_0=D$ and $h_k=CA^{k-1}B$ for $k\ge1$; that is,
$$h_k=CA^{k-1}B\,\mathbb{1}\{k\ge1\}+D\,\delta_k$$
Using the spectral decomposition (when $A$ is diagonalizable), $A^k=\sum_{i=1}^{n}\lambda_i^kE_i$, so
$$h_0=D,\qquad h_k=\sum_{i=1}^{n}\lambda_i^{k-1}CE_iB\qquad(k\ge1)$$
Thus, each modal term is a geometric sequence $\lambda_i^{k-1}$ scaled by the matrix $CE_iB$. Stability condition: the system is (asymptotically) stable if every eigenvalue satisfies $|\lambda_i|<1$.
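The projector construction and the modal impulse-response formula can be verified with a short NumPy sketch (an illustrative example with a randomly perturbed diagonal matrix, so $A$ stays diagonalizable with well-separated eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 3, 1, 1
A = np.diag([0.9, 0.5, -0.3]) + 0.05 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))

lam, V = np.linalg.eig(A)
W = np.linalg.inv(V)  # rows of W are left eigenvectors, normalized so w_i v_i = 1
E = [np.outer(V[:, i], W[i, :]) for i in range(n)]  # projectors E_i = v_i w_i^T

# A = sum_i lambda_i E_i: the projectors resolve A spectrally
print(np.allclose(A, sum(l * Ei for l, Ei in zip(lam, E))))

# Modal impulse response: h_k = sum_i lambda_i^{k-1} C E_i B   (k >= 1)
k = 5
h_direct = C @ np.linalg.matrix_power(A, k - 1) @ B
h_modal = sum(l**(k - 1) * (C @ Ei @ B) for l, Ei in zip(lam, E))
print(np.allclose(h_direct, h_modal))
```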
In the case of non-diagonalizable matrices, $A=\sum_{i=1}^{s}(\lambda_iE_i+N_i)$, and the resolvent expansion with Jordan blocks (finite-order poles) near $z=\lambda_i$ is [44]
$$A^k=\sum_{i=1}^{s}\sum_{j=0}^{\nu_i-1}\frac{k!}{j!\,(k-j)!}\,\lambda_i^{k-j}N_i^{j}E_i\qquad\Longrightarrow\qquad(zI-A)^{-1}=\sum_{i=1}^{s}\sum_{j=1}^{\nu_i}\frac{R_{ij}}{(z-\lambda_i)^{j}}$$
where $\nu_i$ is the size of the largest Jordan block for $\lambda_i$, and the matrices $R_{ij}$ are expressible in terms of $E_i$ and powers of $N_i=(A-\lambda_iI)E_i$ (i.e., $R_{ij}=N_i^{j-1}E_i$ and, in particular, $R_{i1}=E_i$ when $\lambda_i$ is simple). Consequently,
$$h_k=CA^{k-1}B=\sum_{i=1}^{s}\sum_{j=1}^{\nu_i}\frac{(k-1)!}{(j-1)!\,(k-j)!}\,\lambda_i^{k-j}\,CN_i^{j-1}E_iB,\qquad\text{for }k\ge1$$
These combinatorial factors produce the polynomial growth in $k$ that appears with Jordan blocks.

2.3. Block Canonical Forms for MIMO Systems

Consider a discrete-time LTI system represented by the state equation:
$$x_{k+1}=Ax_k+Bu_k\qquad\text{and}\qquad y_k=Cx_k+Du_k\qquad(11)$$
where $x_k\in\mathbb{R}^{n}$, $y_k\in\mathbb{R}^{p}$, $u_k\in\mathbb{R}^{m}$, $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, and $C\in\mathbb{R}^{p\times n}$.
Definition 3
([6]). The system described by (11) is said to be block controllable of index $\ell$ if the matrix $\Omega_c=\operatorname{row}(A^iB)_{i=0}^{\ell-1}=[B\ \ AB\ \ \cdots\ \ A^{\ell-1}B]$ has full rank and $\ell=n/m$ is an integer. In this context, the operation $\operatorname{row}(X_i)_{i=0}^{\ell-1}$ is the row-wise concatenation of the matrices $X_i$ for $i=0,1,\ldots,\ell-1$, interpreted as building a matrix by aligning those matrices horizontally from left to right.
The next theorem gives conditions under which a multivariable linear system can be transformed into a block controller canonical form. This structure simplifies control design and analysis. The transformation is possible if the system order is divisible by the input dimension and the system is block controllable of the corresponding index [28,29,30].
Theorem 4
(Malika Yaici and B. Bekhiti [6,31]). The multivariable control system described in Equation (11) can be transformed into a block controller form if the following two conditions are satisfied:
$\ell=n/m$ is an integer;
The system is block-controllable of index $\ell$.
If both conditions are met, then the coordinate transformation $x_c(k)=T_cx(k)$ transforms the system into the following block canonical controller form
$$x_c(k+1)=A_cx_c(k)+B_cu(k)\qquad\text{and}\qquad y(k)=C_cx_c(k)+D_cu(k)$$
where $T_c=\operatorname{col}(T_{c1}A^i)_{i=0}^{\ell-1}$; $T_{c1}=[\mathbb{O}\ \cdots\ \mathbb{O}\ I_m]\,[\operatorname{row}(A^iB)_{i=0}^{\ell-1}]^{-1}$ and
$$A_c=T_cAT_c^{-1}=\begin{bmatrix}\mathbb{O}&I_m&&\\ \vdots&&\ddots&\\ \mathbb{O}&&&I_m\\ -A_\ell&-A_{\ell-1}&\cdots&-A_1\end{bmatrix};\qquad B_c=T_cB=\begin{bmatrix}\mathbb{O}\\ \vdots\\ \mathbb{O}\\ I_m\end{bmatrix};\qquad C_c=CT_c^{-1}=[C_\ell\ \ \cdots\ \ C_1]$$
where $x_c\in\mathbb{R}^{n}$; $A_i\in\mathbb{R}^{m\times m}$, $C_i\in\mathbb{R}^{p\times m}$, $i=1,\ldots,\ell$; $I_m$ and $\mathbb{O}$ are the $m\times m$ identity and null matrices, respectively; and the superscript $\top$ denotes the transpose. In this context, the operation $\operatorname{col}(X_i)_{i=0}^{\ell-1}$ is the column-wise concatenation of the matrices $X_i$ for $i=0,1,\ldots,\ell-1$, interpreted as building a matrix by aligning them vertically from the top down.
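The transformation can be exercised numerically; the sketch below (illustrative only, using a randomly generated and hence generically block-controllable system with $m=2$, $\ell=2$) builds $T_c$ as stated and checks the resulting block-companion structure:

```python
import numpy as np

rng = np.random.default_rng(2)
m, ell = 2, 2
n = ell * m

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# Block controllability matrix [B, AB] (index ell = 2); generically full rank
Omega_c = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(Omega_c) == n

# T_c1 = [O ... O I_m] * Omega_c^{-1},  T_c = col(T_c1 A^i)
Tc1 = np.hstack([np.zeros((m, m)), np.eye(m)]) @ np.linalg.inv(Omega_c)
Tc = np.vstack([Tc1, Tc1 @ A])

Ac = Tc @ A @ np.linalg.inv(Tc)
Bc = Tc @ B

# Expected structure: top block row [O, I_m] and B_c = [O; I_m]
print(np.allclose(Ac[:m, :m], 0, atol=1e-8), np.allclose(Ac[:m, m:], np.eye(m), atol=1e-8))
print(np.allclose(Bc, np.vstack([np.zeros((m, m)), np.eye(m)]), atol=1e-8))
```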
Definition 4
([26]). The system described by Equation (11) is said to be block observable of index $\ell$ if the matrix $\Omega_o=\operatorname{col}(CA^i)_{i=0}^{\ell-1}$ has full rank and $\ell=n/p$ is an integer.
Theorem 5
(B. Bekhiti [31]). Consider the multivariable system described by Equation (11). This system can be expressed in block observable canonical form if the two criteria are met:
The ratio $\ell=n/p$ is a positive integer;
The system satisfies block observability of index $\ell$, i.e., the matrix $\Omega_o$ has full rank.
Under these conditions, the system admits a similarity transformation of the form $x_o(k)=T_ox(k)$, where $T_o$ is a nonsingular transformation matrix. This yields an equivalent state-space representation in block-observable canonical form
$$x_o(k+1)=A_ox_o(k)+B_ou(k)\qquad\text{and}\qquad y(k)=C_ox_o(k)+D_ou(k)$$
where $T_o=\operatorname{row}(A^iT_{o1})_{i=0}^{\ell-1}$; $T_{o1}=[\operatorname{col}(CA^i)_{i=0}^{\ell-1}]^{-1}\operatorname{col}[\mathbb{O};\ \ldots;\ \mathbb{O};\ I_p]$ and
$$A_o=T_o^{-1}AT_o=\begin{bmatrix}\mathbb{O}&\cdots&\mathbb{O}&-A_\ell\\ I_p&&&-A_{\ell-1}\\ &\ddots&&\vdots\\ &&I_p&-A_1\end{bmatrix};\qquad B_o=T_o^{-1}B=\begin{bmatrix}B_\ell\\ \vdots\\ B_1\end{bmatrix};\qquad C_o=CT_o=[\mathbb{O}\ \cdots\ \mathbb{O}\ I_p]$$
where $x_o\in\mathbb{R}^{n}$; $A_i\in\mathbb{R}^{p\times p}$, $B_i\in\mathbb{R}^{p\times m}$, $i=1,\ldots,\ell$; and $I_p$ and $\mathbb{O}$ are the $p\times p$ identity and null matrices, respectively.

2.4. Block Eigenvalues and the Jordan Normal Form

Let $R_i\in\mathbb{R}^{m\times m}$ (for $i=1,\ldots,\ell$) be square matrices such that the right functional evaluation $\sum_{k=0}^{\ell}A_kR_i^{\ell-k}=\mathbb{O}$ holds, or, alternatively, $[A_\ell\ \ A_{\ell-1}\ \ \cdots\ \ A_0]\cdot\operatorname{col}(R_i^k)_{k=0}^{\ell}=\mathbb{O}$; these matrices $R_i$ are called right block roots or solvents. In compact form, we can write $A_cX_i=X_iR_i$, where $X_i=\operatorname{col}(R_i^k)_{k=0}^{\ell-1}$. If we define the matrix $V_R=[X_1\ \ X_2\ \ \cdots\ \ X_\ell]$, then $A_cV_R=V_R\Lambda_R$, with $\Lambda_R=\operatorname{blkdiag}(R_1,\ldots,R_\ell)$. A matrix $X\in\mathbb{R}^{r\times r}$ is a block eigenvalue of order $r$ of a matrix $A\in\mathbb{R}^{n\times n}$ with $n=\ell r$ if there exists a block eigenvector $V\in\mathbb{R}^{n\times r}$ of full rank such that $AV=VX$. Moreover, if $AV=VX$, with $V$ of full rank, then all the eigenvalues of $X$ are eigenvalues of $A$. A block eigenvalue $X$ has the property that any matrix similar to it is also a block eigenvalue, and it is clear that a block eigenvector $V$ spans an invariant subspace of $A$, since being of full rank is equivalent to having linearly independent columns [6].
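A short NumPy sketch of the solvent/block-eigenpair correspondence (illustrative only; the solvent is built from eigenpairs of a randomly generated monic quadratic matrix polynomial, and may be complex when the selected eigenvalues are):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 2
A1, A2 = rng.standard_normal((m, m)), rng.standard_normal((m, m))

# Monic A(z) = I z^2 + A1 z + A2 and its block companion matrix
Ac = np.block([[np.zeros((m, m)), np.eye(m)], [-A2, -A1]])

# Pick m eigenpairs of Ac; the top m x m block of the eigenvectors gives X
lam, V = np.linalg.eig(Ac)
X = V[:m, :m]
R = X @ np.diag(lam[:m]) @ np.linalg.inv(X)   # candidate right solvent

# R satisfies the right functional evaluation A(R) = R^2 + A1 R + A2 = O ...
print(np.allclose(R @ R + A1 @ R + A2, 0, atol=1e-8))
# ... and X_b = col(R^k) = [I; R] is a block eigenvector: Ac X_b = X_b R
Xb = np.vstack([np.eye(m), R])
print(np.allclose(Ac @ Xb, Xb @ R, atol=1e-8))
```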
Definition 5
([5,7]). Let $A$ be a matrix and let $\{X_i\}_{i=1}^{r}$ be a set of block eigenvalues of $A$ with $\sigma(X_i)\cap\sigma(X_j)=\emptyset$ for $i\ne j$. We say that this set of $X_i$ is a complete set if the following criteria are met:
The union of the eigenvalues of all $X_i$ together equals those of $A$ (i.e., $\bigcup_i\sigma(X_i)=\sigma(A)$);
Each eigenvalue appears with the same partial multiplicities in the $X_i$ as it does in $A$.
The set is complete if these blocks capture the entire spectral information of  A  without distortion.
Theorem 6
([9,10,11]). A set of block eigenvalues $X_1,\ldots,X_r$ of a matrix $A$ is a complete set if, and only if, there is a set of corresponding block eigenvectors $V_1,\ldots,V_r$ such that the matrix $[V_1,\ldots,V_r]$ is of full rank and $A[V_1,\ldots,V_r]=[V_1,\ldots,V_r]\cdot\operatorname{blkdiag}(X_1,\ldots,X_r)$. Moreover, if $R_1,\ldots,R_\ell$ is a complete set of solvents of a companion matrix $A_c$, then the respective block Vandermonde matrix $V_R=\operatorname{row}\big(\operatorname{col}(R_i^{k-1})_{k=1}^{\ell}\big)_{i=1}^{\ell}$ is nonsingular. In addition, if $R_1,\ldots,R_s$ is a complete set of solvents of the matrix $A_c$ with multiplicities $\ell_1,\ldots,\ell_s$, then $A_c=V_RJ_RV_R^{-1}$ and the generalized block matrices $V_R,\ J_R$ are given by
$$V_R=\operatorname{row}\Big(\operatorname{row}\big(\operatorname{col}\big(\tbinom{k-1}{j}R_i^{\,k-j-1}\big)_{k=1}^{\ell}\big)_{j=0}^{\ell_i-1}\Big)_{i=1}^{s};\qquad J_R=\operatorname{blkdiag}\left(\begin{bmatrix}R_i&I_m&&\\ &R_i&\ddots&\\ &&\ddots&I_m\\ &&&R_i\end{bmatrix}\right)_{i=1}^{s}$$

2.5. Solvents of Matrix Polynomials and Divisors

The matrix polynomial problem can be cast into a block eigenvalue formulation as follows. Given a matrix $A$ of order $n=\ell m$, find a matrix $R$ of order $m$ such that $AV=VR$, where $V$ is a matrix of full rank. This links the spectral structure of matrix polynomials to their block companion matrices, enabling analysis via standard linear algebra tools. We now introduce the key definitions and theorems [16].
Definition 6
([6,9]). Let $A(z)=\sum_{i=0}^{\ell}A_iz^{\ell-i}\in\mathbb{C}[z]^{m\times m}$ be a matrix polynomial of degree $\ell$.
  • The rank of $A(z)$ is defined as $\operatorname{rank}A(z)=\max\{\operatorname{rank}A(z_0):z_0\in\mathbb{C}\}$.
  • The matrix polynomial $A(z)$ is said to be monic if $A_0=I_m$, and comonic if $A_\ell=I_m$.
  • The matrix polynomial $A(z)$ is called unimodular if $\Delta(z)=\det A(z)=c\in\mathbb{C}$, $c\ne0$.
  • It is called regular (or nonsingular) if $\operatorname{rank}A(z)=m$, i.e., $A(z)$ has full normal rank.
  • Alternatively, $A(z)$ is nonsingular if $\det A(z)\not\equiv0$. Otherwise, it is referred to as singular.
  • The roots of the polynomial $\Delta(z)$ are termed the eigenvalues (latent roots) of $A(z)$.
  • A rational matrix $H(z)\in\mathbb{C}(z)^{m\times m}$ is called biproper if
$$\lim_{z\to\infty}H(z)=F\in\mathbb{C}^{m\times m}\quad\text{with}\quad\operatorname{rank}F=m$$
where $\mathbb{C}[z]$ denotes the ring of polynomials, while $\mathbb{C}(z)$ is the field of rational functions in $z$.
Definition 7.
Let $A(z)$ be a matrix polynomial in $z$. If $\alpha\in\mathbb{C}$ is such that $\det A(\alpha)=0$, then we say that $\alpha$ is a latent root or an eigenvalue of $A(z)$. If a nonzero vector $v\in\mathbb{C}^m$ is such that $A(\alpha)v=0$, then we say that $v$ is a (right) latent vector or a (right) eigenvector of $A(z)$ corresponding to the eigenvalue $\alpha$.
If $A(z)$ has a singular leading coefficient $A_0$, then $A(z)$ has latent roots at infinity.
Theorem 7
([30]). If $\lambda_i$ is a latent root of $A(z)$ with corresponding right and left latent vectors $v_i$ and $w_i$, respectively, then $\lambda_i$ is an eigenvalue of $A_c$ and $V=\operatorname{col}(\lambda_i^kv_i)_{k=0}^{\ell-1}\in\mathbb{C}^{\ell m}$ is a right block-eigenvector of $A_c$ (similarly, $W=\operatorname{col}(\lambda_i^kw_i)_{k=0}^{\ell-1}\in\mathbb{C}^{\ell m}$ is a left block-eigenvector of $A_c$).
This correspondence between latent roots of A z and eigenpairs of A c is consistent with the following result on their complete spectral equivalence.
Theorem 8
([42]). Let $A(z)\in\mathbb{R}^{m\times m}[z]$ be a matrix polynomial and let $A_c\in\mathbb{R}^{n\times n}$ be the associated block companion matrix; then $A(z)$ and $A_c$ have exactly $n=\ell m$ finite latent roots (counting multiplicities) and $\det(zI-A_c)=\det A(z)$. Moreover, the eigenvalues and corresponding partial multiplicities are common to $A_c$ and $A(z)$.
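This spectral equivalence is easy to check numerically; the sketch below (an illustration with a randomly generated monic quadratic matrix polynomial) computes the latent roots from the block companion matrix and verifies the determinant identity:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 2
A1, A2 = rng.standard_normal((m, m)), rng.standard_normal((m, m))

def Apoly(z):
    """Monic A(z) = I z^2 + A1 z + A2."""
    return np.eye(m) * z**2 + A1 * z + A2

Ac = np.block([[np.zeros((m, m)), np.eye(m)], [-A2, -A1]])
latent = np.linalg.eigvals(Ac)   # n = ell*m = 4 finite latent roots

# Each eigenvalue of Ac makes A(z) singular: det A(lambda_i) = 0
print(np.allclose([np.linalg.det(Apoly(z)) for z in latent], 0, atol=1e-6))
# And det(zI - Ac) = det A(z) at an arbitrary test point
z0 = 1.3 + 0.4j
print(np.isclose(np.linalg.det(z0 * np.eye(2 * m) - Ac), np.linalg.det(Apoly(z0))))
```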
Building on this spectral equivalence, the next result characterizes the block-eigenstructure of A c and the explicit form of its similarity transformation matrix.
Lemma 1
([31]). Let $A(z)\in\mathbb{R}^{m\times m}[z]$ be a matrix polynomial and let $A_c\in\mathbb{R}^{n\times n}$ be the associated block companion matrix with $n=\ell m$. Let $(X_1,J_1),\ldots,(X_s,J_s)$ be a complete set of $s$ block-eigenpairs of $A_c$ (i.e., $A_cX_i=X_iJ_i$, or equivalently $A_c[X_1\ \cdots\ X_s]=[X_1\ \cdots\ X_s]J_c$ with $J_c=\operatorname{blkdiag}(J_1,\ldots,J_s)\in\mathbb{R}^{n\times n}$). If $V\in\mathbb{R}^{n\times n}$ is the similarity matrix of $A_c$, that is, $V$ is such that $A_c=VJ_cV^{-1}$, then $V$ has the form $V=\operatorname{row}\big(\operatorname{col}(V_iJ_i^{k-1})_{k=1}^{\ell}\big)_{i=1}^{s}$.
Having established the correspondence between the block-roots and block eigenvectors of A c , we proceed to characterize the conditions under which a Jordan pair X , J constitutes an eigenpair of the original matrix polynomial A z [8,43].
Lemma 2
([16,42]). Consider a pair $(X,J)$ where $X=[X_1\ \cdots\ X_r]\in\mathbb{C}^{m\times p}$ is of full column rank $p$ and $J=\operatorname{blkdiag}(J_1,\ldots,J_r)\in\mathbb{C}^{p\times p}$ is block-diagonal, with blocks $J_i\in\mathbb{C}^{p_i\times p_i}$ ($\sum_{i=1}^{r}p_i=p$); then the following statements are equivalent:
  • The pair $(X,J)$ is a Jordan pair of $A(z)$, i.e., $\sum_{k=0}^{\ell}A_kXJ^{\ell-k}=\mathbb{O}$;
  • Each block pair $(X_i,J_i)$ is an eigenpair of $A(z)$, i.e., $\sum_{k=0}^{\ell}A_kX_iJ_i^{\ell-k}=\mathbb{O}$, $i=1,\ldots,r$.
This lemma simply states that the Jordan pair condition for the whole block $(X, J)$ is equivalent to the condition holding block-by-block for each $(X_i, J_i)$. Now, we study the matrix solvent problem as a particular case of the invariant pair problem, and we apply to solvents some results we have obtained for invariant pairs [5,6,7,8,9,10,11].
Definition 8
([7,29]). Let $A(z)$ be an $m\times m$ matrix polynomial. A matrix $R\in\mathbb{C}^{m\times m}$ is called a (right) solvent of $A(z)$ if it satisfies $A(R) := A_0 R^{\ell} + \cdots + A_{\ell-1} R + A_{\ell} = 0$, while a left solvent is a matrix $L\in\mathbb{C}^{m\times m}$ satisfying $A(L) := L^{\ell} A_0 + \cdots + L A_{\ell-1} + A_{\ell} = 0$.
The next theorem then links these solvents to the eigenstructure of the associated block companion matrix, providing a constructive characterization [6].
Corollary 1
([16,28]). Let $A(z)\in\mathbb{C}^{m\times m}[z]$ be a matrix polynomial and let $A_c\in\mathbb{R}^{n\times n}$ be the associated block companion matrix. If the matrix $X = [X_1\ \cdots\ X_r]$ is a nonsingular matrix of order $m$ and $J = \mathrm{blkdiag}(J_1, \dots, J_r)\in\mathbb{R}^{m\times m}$, then $(X_i, J_i)$, $i = 1, \dots, r$, are eigenpairs of $A(z)$ if, and only if, $R = X J X^{-1}$ is a solvent of $A(z)$. If $R_1, \dots, R_{\ell}$ is a complete set of solvents of $A(z)$, then $A_c = V_R \Lambda_R V_R^{-1}$ with $\Lambda_R = \mathrm{blkdiag}(R_1, \dots, R_{\ell})$ and $V_R = \mathrm{row}\big(\mathrm{col}(R_i^{k-1})_{k=1}^{\ell}\big)_{i=1}^{\ell}$.
Now, we further describe how the diagonalizability and spectral properties of the block companion matrix determine the number of solvents [30,31].
Theorem 9
([6]). Let $A(z)$ be a matrix polynomial and let $A_c$ be the associated block companion matrix. If $A_c$ is diagonalizable, then $A(z)$ has at least $\ell$ solvents. If $A_c$ is diagonalizable and at least one of its eigenvalues has geometric multiplicity greater than 1, then $A(z)$ has infinitely many solvents. If $A_c$ has distinct eigenvalues, then $A(z)$ has at least $m$ solvents, and the maximum number is $n!/\big(m!(n-m)!\big)$. If $A_c$ is not diagonalizable, then the number of solvents of $A(z)$ can be zero, finite, or infinite.
The classical study of solvents of matrix polynomials uses the theory of divisibility, from which we have that, if $R_1$ is a solvent of $A(z)$, then $(zI_m - R_1)$ is a linear divisor of $A(z)$, and hence the eigenvalues of $R_1$, including multiplicities, are also eigenvalues of $A(z)$. For the explicit computation of solvents, the basic approach is the search for a matrix of the form $R_1 = X J_R X^{-1}$, where $J_R$, having eigenvalues of $A(z)$, is in Jordan normal form. MacDuffee and Gantmacher suggest the search for an arbitrary nonsingular matrix $X$ satisfying $\sum_{k=0}^{\ell} A_k X J_R^{\ell-k} = 0$ [16,42].
The connection between the eigenvalues of the matrix polynomial $A(z)$ and its solvents is established in [9]. A corollary of the generalized Bézout theorem states that if $R$ (respectively, $L$) is a solvent of $A(z)$, then
$$A(z) = B_R(z)\,(zI - R); \qquad\qquad A(z) = (zI - L)\,B_L(z)$$
where $B_R(z)$ and $B_L(z)$ are matrix polynomials of degree $\ell - 1$. Consequently, any finite eigenpair of the matrix $R$ (respectively, $L$) corresponds to a finite eigenpair of the original matrix polynomial $A(z)$ [10,11].
As a direct consequence, any monic matrix polynomial can be factorized into a product of linear factors $A(z) = (zI - Q_{\ell})(zI - Q_{\ell-1})\cdots(zI - Q_1)$, where each $Q_i\in\mathbb{R}^{m\times m}$ is referred to as a spectral factor. In this factorization, the rightmost spectral factor $Q_1$ is a right solvent and the leftmost spectral factor $Q_{\ell}$ is a left solvent; that is, $L = Q_{\ell}$ and $R = Q_1$. It should be noted, however, that the intermediate spectral factors are not necessarily solvents of $A(z)$ in general. In fact, there exist matrix polynomials that admit no solvents at all [6].
Corollary 2
([7,16]). Suppose $A(z)$ has $p$ distinct eigenvalues $\{\lambda_i\}_{i=1}^{p}$ with $m \le p \le n$, and that the corresponding set of $p$ eigenvectors $\{v_i\}_{i=1}^{p}$ satisfies the Haar condition (every subset of $m$ of them is linearly independent). Then there are at least $p!/\big(m!(p-m)!\big)$ different solvents of $A(z)$, and exactly this many if $p = n$, which are given by
$$R = W\,\mathrm{diag}(\mu_1, \dots, \mu_m)\,W^{-1}; \qquad\text{where}\quad W = [w_1\ \cdots\ w_m]\in\mathbb{C}^{m\times m}\quad\text{is invertible}$$
where the eigenpairs $(\mu_i, w_i)_{i=1}^{m}$ are chosen among the eigenpairs $(\lambda_i, v_i)_{i=1}^{p}$ of $A(z)$.
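This construction can be checked numerically. The sketch below (Python/NumPy; the random $2\times 2$ coefficients and the companion linearization are illustrative assumptions, not the paper's example) extracts eigenpairs of a monic quadratic $A(z) = Iz^2 + A_1z + A_2$ from its block companion matrix and assembles a solvent $R = W\,\mathrm{diag}(\mu)\,W^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(6)
m = 2
A1 = rng.standard_normal((m, m))
A2 = rng.standard_normal((m, m))

# Block companion linearization of the monic quadratic A(z) = I z^2 + A1 z + A2:
# eigenvectors of Ac have the form [v; lam*v], so their top block is a latent vector.
Ac = np.block([[np.zeros((m, m)), np.eye(m)],
               [-A2, -A1]])
vals, vecs = np.linalg.eig(Ac)

# Pick m of the 2m eigenpairs (here simply the first two) and build the solvent
mu = vals[:m]
W = vecs[:m, :m]                       # latent vectors (top blocks), assumed invertible
R = W @ np.diag(mu) @ np.linalg.inv(W)

# R is a (possibly complex) solvent: A(R) = R^2 + A1 R + A2 = 0
assert np.allclose(R @ R + A1 @ R + A2, 0, atol=1e-6)
```

Different choices of $m$ eigenpairs (with invertible $W$) yield the different solvents counted by the corollary.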
The next theorem generalizes the construction by showing that any invertible invariant pair of full size directly yields a matrix solvent.
Theorem 10
([30,31,32,33,34,35,36,37,38,39,40,41,42]). Let $A(z)\in\mathbb{C}^{m\times m}[z]$ be an $m\times m$ matrix polynomial and consider an invariant pair $(X, Y)\in\mathbb{C}^{m\times k}\times\mathbb{C}^{k\times k}$ of $A(z)$ (sometimes called an admissible pair). If the matrix $X$ has size $m\times m$, i.e., $k = m$, and is invertible, then $R = X Y X^{-1}$ satisfies $A(R) = 0$ (i.e., $R = X Y X^{-1}$ is a matrix solvent of $A(z)$).
Proof. 
As $(X, Y)\in\mathbb{C}^{m\times k}\times\mathbb{C}^{k\times k}$ is an invariant pair of $A(z)$, we have
$$A(X, Y) := A_0 X Y^{\ell} + \cdots + A_{\ell-1} X Y + A_{\ell} X = 0$$
Since $X$ is invertible, we can post-multiply by $X^{-1}$; noting that $X Y^k X^{-1} = (X Y X^{-1})^k$, we obtain
$$A_0 X Y^{\ell} X^{-1} + \cdots + A_{\ell-1} X Y X^{-1} + A_{\ell} X X^{-1} = 0 \;\Longrightarrow\; A(R) = 0$$
Therefore, $R = X Y X^{-1}$ is a matrix solvent of $A(z)$. □
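The theorem and its proof can be exercised on a toy instance. The following sketch (Python/NumPy; the factored quadratic and random matrices are assumptions chosen so a solvent is known in advance) builds $A(z) = (zI - S)(zI - R)$, verifies that $R$ is a right solvent, and checks that an invertible invariant pair $(X, Y)$ reproduces it via $XYX^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 2
R = rng.standard_normal((m, m))        # intended right solvent
S = rng.standard_normal((m, m))        # left spectral factor

# Monic quadratic with known factorization:
# A(z) = (zI - S)(zI - R) = I z^2 - (S + R) z + S R
A0, A1, A2 = np.eye(m), -(S + R), S @ R

def A_right(M):
    """Right evaluation A(M) = A0 M^2 + A1 M + A2 (Definition 8)."""
    return A0 @ M @ M + A1 @ M + A2

assert np.allclose(A_right(R), 0)      # the rightmost factor is a right solvent

# Theorem 10: an invertible invariant pair (X, Y) yields the solvent X Y X^{-1}
X = rng.standard_normal((m, m))
Y = np.linalg.inv(X) @ R @ X           # then (X, Y) is an invariant pair of A(z)
assert np.allclose(A_right(X @ Y @ np.linalg.inv(X)), 0)
```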
Coprimeness of polynomial matrices is one of the most important concepts in the MFD representation of systems, since it is directly related to controllability and observability (i.e., minimal realization and avoiding internal pole–zero cancelations) [21,22].
Definition 9
([6]). Let $N(z)$ and $D(z)$ be polynomial matrices with the same number of columns. A matrix $R(z)$ is called a greatest common right divisor (GCRD) of them if the following are true:
  • There exist polynomial matrices $X(z)$ and $Y(z)$ such that $N(z) = X(z)R(z)$ and $D(z) = Y(z)R(z)$;
  • For any other CRD $R_1(z)$, there exists a polynomial matrix $W(z)$ such that $R(z) = W(z)R_1(z)$.
Greatest common left divisors (GCLDs) are defined analogously. The Bézout identity provides a constructive criterion: two polynomial matrices are right (or left) coprime if and only if there exist polynomial matrices satisfying the identity, a property widely exploited in controller parameterization, plant inversion, and robust feedback design [45].
Lemma 3
([9,42]). (Bézout Identity for Coprime Matrix Polynomials) The polynomial matrices $A_1(z)$ and $A_2(z)$ with the same number of columns are right (left) coprime if there exist polynomial matrices $X(z)$ and $Y(z)$, which are a solution of
$$\text{right:}\quad X(z)A_1(z) + Y(z)A_2(z) = I \qquad\text{and}\qquad \text{left:}\quad A_1(z)X(z) + A_2(z)Y(z) = I$$
This coprimeness criterion naturally leads to the formulation of matrix Diophantine equations, where the Bézout identity itself is a special case, forming the basis for many control synthesis methods [46].
Definition 10
([24,25]). The Diophantine equation is an equation of the form
$$D_F(z) = X(z)D_R(z) + Y(z)N_R(z) \qquad\text{and}\qquad D_F(z) = D_L(z)X(z) + N_L(z)Y(z)$$
where $D_R$ ($D_L$), $N_R$ ($N_L$), and $D_F$ are given matrix polynomials of adequate dimensions, and $X$ and $Y$ are unknown matrix polynomials with minimum degrees satisfying Equation (18).
The next are important properties involved in solving the Diophantine equation.
Definition 11
([7,8,21]). A rational matrix $H(z)\in\mathbb{C}^{m\times m}(z)$ is proper if $H(\infty)$ is finite, and strictly proper if $H(\infty) = 0$. Equivalently, in an RMFD or LMFD representation, $H(z)$ is strictly proper if the numerator degree is strictly less than the denominator degree. For a transfer function derived from a state-space realization, properness is guaranteed; it is strictly proper when the direct transmission matrix $D$ is zero.
Definition 12
([6]). Let $H(z)\in\mathbb{C}^{m\times m}(z)$ be described in RMFD as $H(z) = N_R(z)D_R^{-1}(z)$; then, $N_R$ and $D_R$ are as follows:
  • Right coprime if they only have unimodular common right divisors;
  • Right coprime if they have no common latent roots and associated latent vectors;
  • Right coprime if $\mathrm{col}\big(N_R(z), D_R(z)\big)$ has full rank for all $z$.
If $N_R(z)$ and $D_R(z)$ are right coprime, then $H(z)$ is said to be irreducible. The same definitions can be applied for left coprimeness for systems described in LMFD.
Theorem 11
([9,42]). Let $R(z)$ be a GCRD of $D_R(z)$ and $N_R(z)$. The Diophantine equation $X(z)D_R(z) + Y(z)N_R(z) = D_F(z)$ has a polynomial solution if, and only if, $D_F(z)$ is right divisible by $R(z)$; i.e., there exists a polynomial matrix $D_{F1}(z)$ with $D_F(z) = D_{F1}(z)R(z)$. In particular, if $D_R(z)$ and $N_R(z)$ are right coprime ($\mathrm{GCRD} = I$), then the equation is solvable for every $D_F(z)$.
To solve the polynomial Diophantine equation $D_F(z) = D(z)X(z) + N(z)Y(z)$ without handling the coefficient equations one by one, we can write the whole set in block matrix form, $T_D X_{stack} + T_N Y_{stack} = D_F^{stack}$, called the block Toeplitz convolution form:
$$\begin{bmatrix} T_D & T_N \end{bmatrix}\begin{bmatrix} X_{stack}\\ Y_{stack}\end{bmatrix} = D_F^{stack}:\qquad
\begin{bmatrix}
D_0 & & \\
D_1 & \ddots & \\
\vdots & \ddots & D_0\\
D_{\ell} & & \vdots\\
& \ddots & D_{\ell}
\end{bmatrix}
\begin{bmatrix} X_0\\ X_1\\ \vdots\\ X_r \end{bmatrix}
+
\begin{bmatrix}
N_0 & & \\
N_1 & \ddots & \\
\vdots & \ddots & N_0\\
N_{\ell} & & \vdots\\
& \ddots & N_{\ell}
\end{bmatrix}
\begin{bmatrix} Y_0\\ Y_1\\ \vdots\\ Y_r \end{bmatrix}
=
\begin{bmatrix} D_{F0}\\ D_{F1}\\ D_{F2}\\ \vdots\\ D_{Fq} \end{bmatrix}$$
This is a square linear system in block form, where each block row corresponds to the coefficient equation for a certain power of $z$. Numerous authors have proposed various methods and conditions for solving the Diophantine equation (see [24,25]). More recently, Kučera [22] investigated proper and strictly proper solutions under broader assumptions. Let $D_c(z)$, $N_c(z)$, $D_R(z)$, $N_R(z)$, and $D_F(z)\in\mathbb{R}^{m\times m}[z]$ be matrix polynomials with $\gcd\big(D_R(z), N_R(z)\big) = I_m$. The matrix polynomial Diophantine equation $D_c(z)D_R(z) + N_c(z)N_R(z) = D_F(z)$ is algebraically equivalent to the linear Sylvester/resultant system $M X_c = X_F$, where $X_c$ stacks the coefficients of $D_c(z)$ and $N_c(z)$, $X_F$ stacks those of $D_F(z)$, and $M$ is built from the coefficient blocks of $D_R(z)$ and $N_R(z)$. The solution can be obtained by any linearly independent search algorithm, matching coefficients from the highest to the lowest degree, solving sequentially for each block and propagating the results until all coefficients are determined [6].
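The coefficient-matching procedure can be sketched as a small NumPy routine. The example below is hypothetical (a first-degree $2\times 2$ RMFD plant, a first-degree compensator, and random coefficients are assumptions chosen for brevity, not the paper's winding-process model); it assembles the block Sylvester system by transposing each coefficient equation so the unknown blocks multiply from the right:

```python
import numpy as np

def polymul_mat(A, B):
    """Multiply matrix polynomials given as coefficient lists, highest power
    first: (A*B)[k] = sum_{i+j=k} A[i] @ B[j]."""
    C = [np.zeros((A[0].shape[0], B[0].shape[1])) for _ in range(len(A) + len(B) - 1)]
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            C[i + j] = C[i + j] + Ai @ Bj
    return C

rng = np.random.default_rng(1)
m, r = 2, 1
DR = [np.eye(m), rng.standard_normal((m, m))]     # DR(z) = I z + DR1
NR = [rng.standard_normal((m, m))]                # NR(z) = NR0
q = (len(DR) - 1) + r                             # closed-loop degree q = ell + r
DF = [np.eye(m)] + [rng.standard_normal((m, m)) for _ in range(q)]

# Block Sylvester/resultant system M Xc = XF: transposing each coefficient
# equation turns left multiplication by the unknowns Dc[i], Nc[i] into a
# standard linear system on their stacked transposes.
colsD = colsN = r + 1
S = np.zeros((m * (q + 1), m * (colsD + colsN)))
offset = (len(DR) - 1) - (len(NR) - 1)            # degree gap between DR and NR
for i in range(colsD):
    for j, D in enumerate(DR):
        S[m*(i+j):m*(i+j+1), m*i:m*(i+1)] += D.T
for i in range(colsN):
    for j, Nj in enumerate(NR):
        k = i + j + offset
        S[m*k:m*(k+1), m*(colsD+i):m*(colsD+i+1)] += Nj.T

f = np.vstack([F.T for F in DF])
sol, *_ = np.linalg.lstsq(S, f, rcond=None)
Dc = [sol[m*i:m*(i+1)].T for i in range(colsD)]
Nc = [sol[m*(colsD+i):m*(colsD+i+1)].T for i in range(colsN)]

# Verify the Diophantine equation Dc(z) DR(z) + Nc(z) NR(z) = DF(z)
lhs = polymul_mat(Dc, DR)
for k, Pk in enumerate(polymul_mat(Nc, NR)):
    lhs[k + offset] = lhs[k + offset] + Pk
assert all(np.allclose(Lk, Fk, atol=1e-8) for Lk, Fk in zip(lhs, DF))
```

Since the plant here is coprime, the system is consistent and the least-squares solve returns an exact compensator; for higher degrees only the loop bounds change.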

3. Matrix Polynomials-Based MIMO Compensator Design

This section presents the design of dynamic compensators by block pole placement (i.e., full matrix polynomial assignment). By providing dynamic rather than static gains, such compensators offer greater design freedom and achieve objectives unattainable with static feedback, while keeping the compensator degree minimal. Common configurations (Figure 1) are unity feedback for sensitivity improvement, output feedback for tracking, and input–output feedback for unobservable states [22]. Using MFDs, which generalize scalar rational functions to MIMO systems, the design builds a desired characteristic matrix polynomial from selected block poles and determines the compensator by solving the matrix Diophantine equation, with the minimal-row-degree solution meeting steady-state, transient, and pole-location requirements [6].
Theorem 12.
Let $H_1(z)\in\mathbb{R}^{p\times m}(z)$ and $H_2(z)\in\mathbb{R}^{m\times p}(z)$ be two rational matrix functions.
(1)
If $\det\big(I \pm H_1 H_2\big) \neq 0$, then $H_1(z)\big(I \pm H_2(z)H_1(z)\big)^{-1} = \big(I \pm H_1(z)H_2(z)\big)^{-1}H_1(z)$;
(2)
The output feedback closed-loop transfer matrix $H_{cls}(z) = H_1(z)\big(I \pm H_2(z)H_1(z)\big)^{-1}$ is proper if, and only if, $I \pm H_2(\infty)H_1(\infty)$ is nonsingular;
(3)
Moreover, if $H_1(z)\in\mathbb{R}^{p\times m}$ and $H_2(z)\in\mathbb{R}^{m\times p}$ are not necessarily proper, then we have $\det\big(I + H_2(z)H_1(z)\big) = \det\big(I + H_1(z)H_2(z)\big)$.
Proof. 
(1) The proof of the first part of the theorem is straightforward:
$$\big(I \pm H_1(z)H_2(z)\big)^{-1}H_1(z) = H_1(z)\big(I \pm H_2(z)H_1(z)\big)^{-1} \iff \big(I \pm H_1(z)H_2(z)\big)H_1(z) = H_1(z)\big(I \pm H_2(z)H_1(z)\big)$$
and the identity on the right holds because both sides equal $H_1(z) \pm H_1(z)H_2(z)H_1(z)$.
(2) Assume that we have the negative output feedback configuration shown in Figure 1a; then, $u_1(z) = r_1(z) - Y_2(z)$. For each sub-system we have $Y_1(z) = H_1(z)u_1(z)$ and $Y_2(z) = H_2(z)Y_1(z)$, which means that $Y_1(z) = H_1(z)\big(r_1(z) - Y_2(z)\big) = H_1(z)\big(r_1(z) - H_2(z)Y_1(z)\big)$. This implies that $\big(I + H_1(z)H_2(z)\big)Y_1(z) = H_1(z)r_1(z)$, and the overall transfer function is $H_{cls}(z) = H_1(z)\big(I + H_2(z)H_1(z)\big)^{-1}$; it is called proper (or causal) if $H_{cls}(\infty)$ is finite. Now assume that $\lim_{z\to\infty}\big(I + H_2(z)H_1(z)\big) = F$ and $\lim_{z\to\infty}H_1(z) = G$; furthermore, if $\det F \neq 0$, then $\lim_{z\to\infty}H_{cls}(z) = G F^{-1}$, or explicitly, $\lim_{z\to\infty}H_{cls}(z) = G\,\mathrm{adj}(F)/\det F$. Therefore, a necessary and sufficient condition for $H_{cls}(z)$ to be a proper rational function is the nonsingularity of $F = I + H_2(\infty)H_1(\infty)$.
(3) Using the first part with the determinant, we obtain
$$\det\Big(H_1(z)\big(I \pm H_2(z)H_1(z)\big)^{-1}\Big) = \det\Big(\big(I \pm H_1(z)H_2(z)\big)^{-1}H_1(z)\Big)$$
This final result leads to $\det\big(I + H_2(z)H_1(z)\big) = \det\big(I + H_1(z)H_2(z)\big)$. □
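Part (3) is easy to spot-check numerically. The snippet below (a minimal sketch with random constant rectangular matrices, standing in for the rational-matrix case) verifies the determinant identity:

```python
import numpy as np

# Numeric spot-check of part (3): det(I_m + H2 H1) = det(I_p + H1 H2)
# for constant rectangular matrices (illustrative assumption: p = 2, m = 3).
rng = np.random.default_rng(2)
p, m = 2, 3
H1 = rng.standard_normal((p, m))
H2 = rng.standard_normal((m, p))
d1 = np.linalg.det(np.eye(m) + H2 @ H1)   # m x m determinant
d2 = np.linalg.det(np.eye(p) + H1 @ H2)   # p x p determinant
assert np.isclose(d1, d2)
```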
Definition 13.
Consider a proper rational matrix transfer function $H(z)\in\mathbb{R}^{p\times m}$ factored as $H(z) = N_R(z)D_R^{-1}(z) = D_L^{-1}(z)N_L(z)$. It is assumed that the matrix polynomials $D_R(z)$ and $N_R(z)$ are right coprime and $D_L(z)$ and $N_L(z)$ are left coprime; then, the characteristic polynomial of $H(z)$ is defined as $\Delta(z) = \det D_R(z) = \det D_L(z)$, and the degree of $H(z)$ is defined as $\deg H(z) = \deg\det D_R(z) = \deg\det D_L(z)$, where $\deg(\cdot)$ and $\det(\cdot)$ stand for the degree and the determinant, respectively.
Definition 14.
A nonsingular $m$th-order polynomial matrix $D(z)$ is said to be column-reduced if $\deg\det D(z) = \sum_{i=1}^{m}\delta_{ci}$, where $\delta_{ci}$ is the maximum degree of the $i$th column, and it is said to be row-reduced if $\deg\det D(z) = \sum_{i=1}^{m}\delta_{ri}$, where $\delta_{ri}$ is the maximum degree of the $i$th row of $D(z)$.
Column- and row-reduced polynomial matrices relate directly to the degree and properness of rational matrix functions, providing numerator–denominator degree bounds essential for properness [24]. The following lemma is key in compensator design, ensuring realizable controllers with minimal degree.
Lemma 4
([8]). If $D(z)$ is column-reduced, then $H(z) = N(z)D^{-1}(z)$ is strictly proper (proper) if, and only if, each column of $N(z)$ has a degree less than (less than or equal to) the degree of the corresponding column of $D(z)$.

3.1. Unity Feedback Compensators

Many applied mathematicians have agreed on some arrangements and rules that facilitate mathematical work. Among these, we have to mention that, in the field of control, if the matrix transfer function of the plant is in LMFD, then the controller should be designed in RMFD, and vice versa (Figure 2 illustrates this) [28,31].
RMFD Plant: Consider the unity feedback system in Figure 2a. For a $p\times m$ system $H(z) = N_R(z)D_R^{-1}(z)$ described by an RMFD, the $m\times p$ compensator $C(z)$ will be described by a proper rational LMFD $C(z) = D_c^{-1}(z)N_c(z)$.
The closed-loop transfer matrix is given by $H_{cls}(z) = \big(I_p + H(z)C(z)\big)^{-1}H(z)C(z)$. Using the first part of Theorem 12, we obtain
$$H_{cls}(z) = \big(I_p + H(z)C(z)\big)^{-1}H(z)C(z) = H(z)\big(I_m + C(z)H(z)\big)^{-1}C(z)$$
Replacing $H(z) = N_R(z)D_R^{-1}(z)$ and $C(z) = D_c^{-1}(z)N_c(z)$ in $H_{cls}(z)$ yields
$$H_{cls}(z) = N_R(z)\big(D_c(z)D_R(z) + N_c(z)N_R(z)\big)^{-1}N_c(z)$$
Define the matrix polynomial $D_F(z) = D_c(z)D_R(z) + N_c(z)N_R(z)$; then, we have $H_{cls}(z) = N_R(z)D_F^{-1}(z)N_c(z)$. The design problem is: given $D_R(z)$, $N_R(z)$, and an arbitrary $D_F(z)$, determine $D_c(z)$ and $N_c(z)$ satisfying the closed-loop compensator equation $D_F = D_c D_R + N_c N_R$. The roots of $D_F(z)$ are the closed-loop poles, and its solvents are the block poles of $H_{cls}(z)$. Achieving arbitrary block pole placement for the feedback configurations described earlier requires solving the compensator matrix Diophantine equation, whose solvability was addressed in Theorem 11.
LMFD Plant: Consider the unity feedback system in Figure 2b. For a $p\times m$ system $H(z) = D_L^{-1}(z)N_L(z)$ described by an LMFD, the $m\times p$ compensator will be described by a proper rational RMFD $C(z) = N_c(z)D_c^{-1}(z)$. Following the same steps, we obtain
$$H_{cls}(z) = \big(I + D_L^{-1}(z)N_L(z)N_c(z)D_c^{-1}(z)\big)^{-1}D_L^{-1}(z)N_L(z)N_c(z)D_c^{-1}(z) = D_c(z)\big(D_L(z)D_c(z) + N_L(z)N_c(z)\big)^{-1}N_L(z)N_c(z)D_c^{-1}(z) = D_c(z)D_F^{-1}(z)N_L(z)N_c(z)D_c^{-1}(z)$$
where $D_F(z) = D_L(z)D_c(z) + N_L(z)N_c(z)$ is the "left" Diophantine equation. If we replace $N_L N_c = D_F - D_L D_c$ in $H_{cls}(z)$, we obtain $H_{cls}(z) = I - D_c(z)D_F^{-1}(z)D_L(z)$. Like before, the roots of $D_F(z)$ are the poles of the closed-loop system, and the compensator is determined by solving the Diophantine equation.
To solve the matrix Diophantine equation, the functional form is converted into an equivalent algebraic form, known as the Sylvester matrix equation, which is more suitable for a computational solution [22,25]. If the subscript $r$ stands for the degree of the compensator and $\ell$ is the degree of the open-loop transfer function $H(z)$, then the closed-loop degree is $q = \ell + r$.
Remark. 
The existence of a MIMO controller using this procedure depends on the solvability of the last rectangular matrix equation [6].

3.2. Output Feedback Configuration

A dynamic compensator acts as an observer with linear feedback of the estimated state, producing dynamic output feedback. Since this affects only the controllable and observable part of the system, coprime MFDs are used to represent it [30]. This section introduces the output feedback configuration, as shown in Figure 3.
RMFD Plant: For a system described in RMFD, $H(z) = N_R(z)D_R^{-1}(z)$, and the compensator described in LMFD, $C(z) = D_c^{-1}(z)N_c(z)$, the closed-loop transfer function of the configuration of Figure 3a is given by $H_{cls}(z) = H(z)\big(I + C(z)H(z)\big)^{-1}$, where $H(z)$ and $C(z)$ have polynomial coefficients with the following dimensions: $D_{Ri}, D_{ci}\in\mathbb{R}^{m\times m}$, $N_{Ri}\in\mathbb{R}^{p\times m}$, and $N_{ci}\in\mathbb{R}^{m\times p}$. So, the closed-loop transfer function is
$$H_{cls}(z) = N_R(z)D_R^{-1}(z)\big(I + D_c^{-1}(z)N_c(z)N_R(z)D_R^{-1}(z)\big)^{-1} = N_R(z)\big(D_c(z)D_R(z) + N_c(z)N_R(z)\big)^{-1}D_c(z)$$
If we let $D_F = D_c D_R + N_c N_R$ be the right Diophantine equation, then the closed-loop transfer function will be $H_{cls}(z) = N_R(z)D_F^{-1}(z)D_c(z)$; the roots of $D_F(z)$ are the poles of $H_{cls}(z)$, and the compensator is fully determined by solving $D_F = D_c D_R + N_c N_R$.
LMFD Plant: For a system described by LMFD, $H(z) = D_L^{-1}(z)N_L(z)$, the compensator is described by an RMFD, $C(z) = N_c(z)D_c^{-1}(z)$, and the closed-loop system of Figure 3b will be rewritten as $H_{cls}(z) = \big(I + H(z)C(z)\big)^{-1}H(z)$, where the matrix parameters $D_{Li}, D_{ci}$ are $p\times p$ real matrices, $N_{Li}$ are $p\times m$ real matrices, and $N_{ci}$ are $m\times p$ real matrices. So, the closed-loop transfer function is
$$H_{cls}(z) = \big(I + D_L^{-1}(z)N_L(z)N_c(z)D_c^{-1}(z)\big)^{-1}D_L^{-1}(z)N_L(z) = D_c(z)\big(D_L(z)D_c(z) + N_L(z)N_c(z)\big)^{-1}N_L(z)$$
Let $D_F(z) = D_L(z)D_c(z) + N_L(z)N_c(z)$ be the left Diophantine equation; then, $H_{cls}(z) = D_c(z)D_F^{-1}(z)N_L(z)$. The poles of the closed-loop system are fully defined by the roots of $D_F(z)$, and the compensator can be determined.

3.3. Input–Output Feedback Configuration

There are many possible two-DOF configurations. Here, one compensator is placed in the feedback path and another takes its inputs from the references (Figure 4); this is sometimes called the two-parameter configuration or input–output feedback configuration (it seems to be more natural and more suitable for practical applications) [6,9].
RMFD Plant: Consider the input–output feedback system shown in Figure 4a. The plant is described by a $p\times m$ proper rational matrix $H(z) = N_R(z)D_R^{-1}(z)$. The compensators are denoted by the $m\times m$ proper rational matrix $C_0(z) = D_c^{-1}(z)L(z)$ and the $m\times p$ rational matrix $C_1(z) = D_c^{-1}(z)M(z)$. The closed-loop transfer matrix can be computed as $H_{cls}(z) = H(z)\big(I + C_0(z) + C_1(z)H(z)\big)^{-1}$. If we replace $H(z)$, $C_0(z)$, and $C_1(z)$ in $H_{cls}(z)$, then the closed-loop transfer function will be
$$H_{cls}(z) = N_R(z)D_R^{-1}(z)\big(I + D_c^{-1}(z)L(z) + D_c^{-1}(z)M(z)N_R(z)D_R^{-1}(z)\big)^{-1}$$
where $D_{Ri}$, $D_{ci}$, and $L_i$ are $m\times m$ real matrices, $N_{Ri}$ are $p\times m$ real matrices, and $M_i$ are $m\times p$ real matrices. An appropriate rearrangement gives $H_{cls}(z) = N_R(z)\big(D_c(z)D_R(z) + L(z)D_R(z) + M(z)N_R(z)\big)^{-1}D_c(z)$. Let us define the following matrix equation: $D_F(z) = D_c(z)D_R(z) + L(z)D_R(z) + M(z)N_R(z)$. The poles of the closed-loop system are fully defined by the roots of $D_F(z)$.
Let us define the error $E(z) = D_F(z) - D_c(z)D_R(z) = L(z)D_R(z) + M(z)N_R(z)$; then, the solution of the equation $L(z)D_R(z) + M(z)N_R(z) = E(z)$ will determine the compensator numerators. If $D_c(z)$ and $D_F(z)$ are known, then the resolution of this Diophantine equation determines the numerators $L(z)$ and $M(z)$ of the two compensators. So, the closed-loop system will be $H_{cls}(z) = N_R(z)D_F^{-1}(z)D_c(z)$.
LMFD Plant: Let us have the same feedback configuration as in Figure 4b. If $H(z) = D_L^{-1}(z)N_L(z)$ is in LMFD and the two compensators are in RMFD, $C_0(z) = L(z)D_c^{-1}(z)$ and $C_1(z) = M(z)D_c^{-1}(z)$, then the following development is obtained:
$$H_{cls}(z) = D_c(z)\big(D_L(z)D_c(z) + D_L(z)L(z) + N_L(z)M(z)\big)^{-1}N_L(z)$$
Let us define $D_F(z) = D_L(z)D_c(z) + D_L(z)L(z) + N_L(z)M(z)$ as the desired closed-loop denominator and let $E(z) = D_F(z) - D_L(z)D_c(z) = D_L(z)L(z) + N_L(z)M(z)$. As before, $H_{cls}(z) = D_c(z)D_F^{-1}(z)N_L(z)$; the second part of the equation is the compensator equation, which can be solved once the compensator denominator is fixed arbitrarily.
Pre-compensators: Pre-compensators place zeros and reduce open-loop interactions in multivariable systems. While static designs are simple, dynamic pre-compensators often provide superior performance. Shamgah proposed a QP-based method to achieve diagonal dominance for decoupling, and Basilio designed a pre-compensator to minimize the eigenvector matrix condition number and improve normality for effective use of the characteristic locus method [6,21,22].
Figure 5 presents an example of a feedback configuration with a dynamic pre-compensator to place eventual desired zeros, where $H_p(z)$ is $m\times p$, $H(z)$ is $p\times m$, and $C(z)$ is $m\times p$. If $H(z)$ is in RMFD, so $H_p(z)$ is in LMFD, then the closed-loop system will be $H_{cls}(z) = N_R(z)D_F^{-1}(z)D_c(z)H_p(z)$, with $D_F(z) = D_c D_R + N_c N_R$. If $H(z)$ is in LMFD, so $C(z)$ is in RMFD, then the closed-loop system will be $H_{cls}(z) = H_p(z)D_c(z)D_F^{-1}(z)N_L(z)$, with $D_F(z) = D_L D_c + N_L N_c$.
Remark. 
From the last obtained results, we conclude that the roots of $D_F(z)$ are the poles of the closed-loop system, and the zeros of the closed-loop system will be constituted by the zeros of the plant, the roots of the compensator denominator, and the zeros of the pre-compensator, which we can choose in order to achieve a certain goal.
Proposition 1.
From the desired block poles, we can construct the denominator of the closed-loop system. Given a set of $\ell$ right desired block roots $R_{di}\in\mathbb{R}^{m\times m}$ for $i = 1, \dots, \ell$, a right matrix polynomial $D_F(z) = I_m z^{\ell} + \sum_{i=0}^{\ell-1} D_{F,\ell-i}\, z^{i}$ can be formed using the following relation: $[D_{F\ell}\ \cdots\ D_{F2}\ D_{F1}] = -[R_{d1}^{\ell}\ R_{d2}^{\ell}\ \cdots\ R_{d\ell}^{\ell}]\,V_R^{-1}$, or equivalently $[D_{F\ell}\ \cdots\ D_{F1}] = -\mathrm{row}\big(R_{di}^{\ell}\big)_{i=1}^{\ell}\,V_R^{-1}$, where $V_R$ is the right block Vandermonde matrix. Also, we can construct $D_F(z)$ from a set of left solvents using the following equation: $\mathrm{col}\big(D_{F\ell}, \dots, D_{F1}\big) = -V_L^{-1}\,\mathrm{col}\big(L_{di}^{\ell}\big)_{i=1}^{\ell}$, where $V_L$ is a left block Vandermonde matrix, $V_L = \mathrm{col}\big(\mathrm{row}(L_{di}^{k-1})_{k=1}^{\ell}\big)_{i=1}^{\ell}$.
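The right-solvent construction can be sketched numerically. The example below (Python/NumPy; randomly chosen block roots and the monic convention $D_F(z) = Iz^2 + D_{F1}z + D_{F2}$ are illustrative assumptions) builds $D_F(z)$ from two prescribed block poles via the block Vandermonde matrix and checks that each is indeed a right solvent:

```python
import numpy as np

rng = np.random.default_rng(3)
m, ell = 2, 2
Rd = [rng.standard_normal((m, m)) for _ in range(ell)]   # desired block poles

# Right block Vandermonde V_R: block (k, i) = R_di^k for k = 0..ell-1
VR = np.block([[np.linalg.matrix_power(R, k) for R in Rd] for k in range(ell)])
Rpow = np.hstack([np.linalg.matrix_power(R, ell) for R in Rd])

# [D_F2  D_F1] = -[R_d1^2  R_d2^2] V_R^{-1}
coeffs = -Rpow @ np.linalg.inv(VR)
DF2, DF1 = coeffs[:, :m], coeffs[:, m:]

# Each prescribed block pole is a right solvent: R^2 + D_F1 R + D_F2 = 0
for R in Rd:
    assert np.allclose(R @ R + DF1 @ R + DF2, 0, atol=1e-6)
```

The same pattern extends to any $\ell$ by enlarging the Vandermonde blocks; $V_R$ must be invertible, which requires a complete set of block roots.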

4. Model-Approximation Theory and MIMO Identification Algorithms

System identification offers an approximate model often sufficient for control objectives. To this end, we review key MIMO identification algorithms in [1,2,4,12]. While conventional MIMO least squares (MLS) methods are widely used in the literature, their sensitivity to output noise motivates the use of two-step instrumental variable (IV) estimators based on auxiliary models [1]. Additional bias-reduction strategies include weighted least squares (WLS) using modulating functions and bias-compensated least squares (BCLS) with the GPMF approach [14]. For detailed insights into ARX model estimation techniques, see [4].

4.1. MIMO Least Squares

The least squares method can be seen as a data fitting method. It was developed by Gauss and first applied in celestial mechanics, e.g., to predict the future trajectory of the asteroid Ceres. The idea of the method is as follows. There is a number $N$ of equations involving a number $n$ of unknowns (or parameters), which are organized in the vector $\theta$. The least squares solution tries to minimize the error by minimizing the sum of squared errors. The method is especially elegant if the equations are linear in the parameters $\theta$. A MIMO ARMAX (autoregressive moving average with exogenous excitation) model is [3]
$$A(z^{-1})\,y[k] = B(z^{-1})\,u[k] + C(z^{-1})\,e[k], \qquad z^{-1}\ \text{is the backward shift operator}$$
This expression can alternatively be reformulated using the LMFD as
$$y[k] = A^{-1}(z^{-1})B(z^{-1})\,u[k] + A^{-1}(z^{-1})C(z^{-1})\,e[k]$$
where $u[k]\in\mathbb{R}^{m}$ and $y[k]\in\mathbb{R}^{p}$ are the input and output vectors of the system, respectively, while $e[k]\in\mathbb{R}^{p}$ is a white-noise signal, and the polynomial matrices $A(z^{-1})$, $B(z^{-1})$, and $C(z^{-1})$ have the following structure: $A(z^{-1}) = I_p + A_1 z^{-1} + A_2 z^{-2} + \dots + A_{n_a} z^{-n_a}$, $B(z^{-1}) = B_1 z^{-1} + \dots + B_{n_b} z^{-n_b}$, and $C(z^{-1}) = I_p + C_1 z^{-1} + \dots + C_{n_c} z^{-n_c}$. The identification task focuses on estimating the matrix coefficients $A_i\in\mathbb{R}^{p\times p}$ and $B_i\in\mathbb{R}^{p\times m}$ that define the matrix polynomials $A(z^{-1})$ and $B(z^{-1})$, under the assumption that $C(z^{-1}) = I_p$, the identity matrix [4,12,13]. By transposing Equation (19) and expanding $A(z^{-1})$ and $B(z^{-1})$, we obtain the expression $e[k] = y[k] - \varphi[k]\theta$ (with data stored as row vectors); that is,
$$e[k] = y[k] + y[k-1]A_1^{\top} + \dots + y[k-n_a]A_{n_a}^{\top} - u[k-1]B_1^{\top} - \dots - u[k-n_b]B_{n_b}^{\top} = y[k] - \big[-y[k-1]\ \cdots\ -y[k-n_a],\ u[k-1]\ \cdots\ u[k-n_b]\big]\begin{bmatrix} A_1^{\top}\\ \vdots\\ A_{n_a}^{\top}\\ B_1^{\top}\\ \vdots\\ B_{n_b}^{\top} \end{bmatrix}$$
where the parameter matrix $\theta$ and the regression (row) vector $\varphi[k]$ are defined as $\theta = \mathrm{col}\big(A_1^{\top}, \dots, A_{n_a}^{\top}, B_1^{\top}, \dots, B_{n_b}^{\top}\big)$ and $\varphi[k] = \big[-y[k-1]\ \cdots\ -y[k-n_a],\ u[k-1]\ \cdots\ u[k-n_b]\big]$. So, the least squares estimate is
$$\hat{\theta}_{ls} = \big(\phi^{\top}\phi\big)^{-1}\phi^{\top}Y, \qquad \phi = \big[\phi_y\ \ \phi_u\big]$$
$$Y = \begin{bmatrix} y[n+1,:]\\ \vdots\\ y[M,:] \end{bmatrix};\qquad \phi_y = -\begin{bmatrix} y[n,:] & \cdots & y[n-n_a+1,:]\\ \vdots & & \vdots\\ y[M-1,:] & \cdots & y[M-n_a,:] \end{bmatrix};\qquad \phi_u = \begin{bmatrix} u[n,:] & \cdots & u[n-n_b+1,:]\\ \vdots & & \vdots\\ u[M-1,:] & \cdots & u[M-n_b,:] \end{bmatrix}$$
where $n = n_a$ and $M$ is the number of I/O data samples. To avoid the large memory space and the large-dimension matrix inversion required by the simple least squares, a MIMO recursive least squares algorithm can be elaborated for use in digital software, preserving memory space; see [1,2,3,4,17].
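The batch estimate can be sketched on simulated data. The example below (Python/NumPy; a first-order $2\times 2$ ARX model with $C(z^{-1}) = I_p$ and noise-free data is an illustrative assumption, so the estimate recovers the true coefficient blocks essentially exactly):

```python
import numpy as np

# Batch MIMO least squares on a simulated 2x2 first-order ARX model
# (assumed model: y[k] = -A1 y[k-1] + B1 u[k-1], row-data convention).
rng = np.random.default_rng(4)
p, m_in = 2, 2
A1 = np.array([[0.5, -0.4], [0.3, 0.6]])
B1 = np.array([[0.1, 0.9], [0.4, 0.5]])
M = 500
u = rng.standard_normal((M, m_in))
y = np.zeros((M, p))
for k in range(1, M):
    y[k] = -y[k-1] @ A1.T + u[k-1] @ B1.T

# Regressor rows phi[k] = [-y[k-1,:], u[k-1,:]]; parameter theta = col(A1', B1')
Phi = np.hstack([-y[:-1], u[:-1]])
Y = y[1:]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
A1_hat, B1_hat = theta[:p].T, theta[p:].T
assert np.allclose(A1_hat, A1, atol=1e-8)
assert np.allclose(B1_hat, B1, atol=1e-8)
```

With noisy outputs the same estimator becomes biased, which is what motivates the IV, WLS, and BCLS variants cited above.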

4.2. MIMO Recursive Least Squares

In this subsection, we present our MIMO system-identification attempts using the LMFD, with a recursive MIMO least squares implementation given as Algorithm 1:
Algorithm 1: Matrix Polynomial Recursive Least Squares
1 (A) Initialize $\theta$ to zero and let $P = c\times I$, where $c$ is a large constant
2 (B) For $k = n_a : N-1$
3   $\psi = \big[-y[k,:]\ \cdots\ -y[k-n_a+1,:],\ u[k,:]\ \cdots\ u[k-n_b+1,:]\big]^{\top}$
4   $G = P\psi\big(1 + \psi^{\top}P\psi\big)^{-1}$
5   $\theta = \theta + G\big(y[k+1,:] - \psi^{\top}\theta\big)$
6   $P = \big(I - G\psi^{\top}\big)P$
7 End
Proof. 
In many practical cases, it is necessary that parameter estimation takes place concurrently with the system's operation. This parameter estimation problem is called on-line identification, and its methodology usually leads to a recursive procedure for every new measurement (or data entry). For this reason, it is also called recursive identification. In the previous sections, we have seen that the ordinary least squares estimate is given by
$$\theta[k] = \big(\phi^{\top}\phi\big)^{-1}\phi^{\top}Y = \Big(\sum_{i=1}^{k}\psi_i\psi_i^{\top}\Big)^{-1}\sum_{i=1}^{k}\psi_i y_i$$
If we define $P[k] = \big(\sum_{i=1}^{k}\psi_i\psi_i^{\top}\big)^{-1}$, then we obtain $\theta[k] = P[k]\sum_{i=1}^{k}\psi_i y_i$ and $P^{-1}[k+1] = \sum_{i=1}^{k+1}\psi_i\psi_i^{\top} = P^{-1}[k] + \psi_{k+1}\psi_{k+1}^{\top}$; therefore,
$$\theta[k+1] = P[k+1]\big(P^{-1}[k]\theta[k] + \psi_{k+1}y_{k+1}\big) = P[k+1]P^{-1}[k]\theta[k] + P[k+1]\psi_{k+1}y_{k+1}$$
or, equivalently, $\theta[k+1] = P[k+1]\big(P^{-1}[k+1] - \psi_{k+1}\psi_{k+1}^{\top}\big)\theta[k] + P[k+1]\psi_{k+1}y_{k+1}$.
Now we can summarize this development into the following:
$$\theta[k+1] = \theta[k] + P[k+1]\psi_{k+1}\big(y_{k+1} - \psi_{k+1}^{\top}\theta[k]\big),\qquad P[k+1] = \big(P^{-1}[k] + \psi_{k+1}\psi_{k+1}^{\top}\big)^{-1}$$
To avoid matrix inversion, we use the matrix inversion lemma $(A + BCD)^{-1} = A^{-1} - A^{-1}B\big(C^{-1} + DA^{-1}B\big)^{-1}DA^{-1}$ and define the change of variables $A = P^{-1}[k]$, $B = \psi_{k+1}$, $C = 1$, and $D = \psi_{k+1}^{\top}$; we obtain $P[k+1] = P[k] - P[k]\psi_{k+1}\psi_{k+1}^{\top}P[k]\big/\big(1 + \psi_{k+1}^{\top}P[k]\psi_{k+1}\big)$. If we define the gain $G = P[k+1]\psi_{k+1}$, then, using the previous result, we obtain
$$G = P[k+1]\psi_{k+1} = P[k]\psi_{k+1} - \frac{P[k]\psi_{k+1}\psi_{k+1}^{\top}P[k]\psi_{k+1}}{1 + \psi_{k+1}^{\top}P[k]\psi_{k+1}} = \frac{P[k]\psi_{k+1}}{1 + \psi_{k+1}^{\top}P[k]\psi_{k+1}}$$
In particular applications, errors occurring at different time instants may have varying importance. For instance, older errors are often considered less significant. To account for this, we introduce a weighted error vector, $\varepsilon_W = W\varepsilon = W\big(Y - \phi\theta\big)$, where $W$ is a weighting matrix, typically chosen as diagonal. The associated weighted LS criterion is $J(\varepsilon) = \varepsilon_W^{\top}\varepsilon_W = \big(Y - \phi\theta\big)^{\top}W^{\top}W\big(Y - \phi\theta\big)$. Minimizing $J(\varepsilon)$ with respect to $\theta$ yields $\theta = \big(\phi^{\top}W^{\top}W\phi\big)^{-1}\phi^{\top}W^{\top}WY$. We denote $Q = W^{\top}W = \mathrm{diag}\big(\lambda^{N-1}, \dots, \lambda, \lambda^{0}\big)$, with $0.9 < \lambda < 0.99$, where $\lambda$ is called the forgetting factor. The recursive version of this algorithm is $P[k+1] = \lambda^{-1}\Big(P[k] - P[k]\psi_{k+1}\psi_{k+1}^{\top}P[k]\big/\big(\lambda + \psi_{k+1}^{\top}P[k]\psi_{k+1}\big)\Big)$.
A simulation experiment has been performed for a signal-to-noise ratio equal to 20 dB for both outputs; the next example shows the results. □
Example 1.
Consider the next dynamical system with the following matrices:
$$A(z^{-1}) = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0.5 & 0.4\\ 0.3 & 0.6 \end{bmatrix}z^{-1} + \begin{bmatrix} 0.1 & 0.3\\ 0.2 & 0.3 \end{bmatrix}z^{-2};\qquad B(z^{-1}) = \begin{bmatrix} 0.1 & 0.9\\ 0.4 & 0.5 \end{bmatrix}z^{-1}$$
A PRBS data sequence of length $N = 1000$ is used to excite the system. A simulation experiment has been performed for a signal-to-noise ratio equal to 20 dB for both outputs. The results of using the following algorithm are as follows:
$$P = 1000\times I;\qquad \text{For } k = 2 : 1000-1 \text{ do:}\quad \psi = \big[-y[k,:],\ -y[k-1,:],\ u[k,:]\big]^{\top},\quad G = P\psi\big(1 + \psi^{\top}P\psi\big)^{-1},\quad \theta = \theta + G\big(y[k+1,:] - \psi^{\top}\theta\big),\quad P = \big(I - G\psi^{\top}\big)P$$
which are given by
$$\hat{A}_1 = \begin{bmatrix} 0.5015 & 0.4006\\ 0.3047 & 0.5953 \end{bmatrix},\qquad \hat{A}_2 = \begin{bmatrix} 0.0947 & 0.2994\\ 0.2053 & 0.6957 \end{bmatrix},\qquad \hat{B}_1 = \begin{bmatrix} 0.0992 & 0.8998\\ 0.4027 & 0.5021 \end{bmatrix}$$
where the dimensions of the corresponding matrices are, respectively, $\dim\psi = 6\times 1$, $\dim G = 6\times 1$, $\dim P = 6\times 6$, $\dim\theta = 6\times 2$, and $\dim I = 6\times 6$.
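A runnable end-to-end sketch of the recursion (Python/NumPy; the $2\times 2$ first-order ARX model, noise-free data, seed, and forgetting factor are illustrative assumptions, not the paper's winding-process data) shows the estimate converging to the true coefficient blocks:

```python
import numpy as np

rng = np.random.default_rng(5)
p = 2
A1 = np.array([[0.5, -0.4], [0.3, 0.6]])
B1 = np.array([[0.1, 0.9], [0.4, 0.5]])
N = 400
u = rng.standard_normal((N, p))
y = np.zeros((N, p))
for k in range(1, N):
    y[k] = -y[k-1] @ A1.T + u[k-1] @ B1.T    # row-data form of A(z^-1)y = B(z^-1)u

n_par = 2 * p                                # regressor [-y[k,:], u[k,:]]
theta = np.zeros((n_par, p))
P = 1000.0 * np.eye(n_par)                   # P = c I with a large constant c
lam = 0.98                                   # forgetting factor
for k in range(N - 1):
    psi = np.concatenate([-y[k], u[k]])      # column regressor psi
    G = P @ psi / (lam + psi @ P @ psi)      # gain G = P psi / (lam + psi' P psi)
    theta = theta + np.outer(G, y[k+1] - psi @ theta)
    P = (P - np.outer(G, psi @ P)) / lam     # P = (I - G psi') P / lam

A1_hat, B1_hat = theta[:p].T, theta[p:].T
assert np.allclose(A1_hat, A1, atol=1e-5)
assert np.allclose(B1_hat, B1, atol=1e-5)
```

With noise-free data the recursion reaches the exact parameters within numerical precision; under measurement noise it converges in mean to the (biased) least-squares solution.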
Comments:
The MIMO-RLS reduces the computational load associated with MIMO least squares by casting it in recursive form, which is useful for online system identification;
This basic RLS can be improved by introducing a forgetting factor [4] in order to give more weight to the most recent data.

4.3. MIMO Maximum Likelihood

For the purpose of the maximum likelihood (ML) method, we introduce the further assumption that the noise in the model (19) is Gaussian-distributed. The ML estimate of $\theta$ is obtained by maximizing the likelihood function, i.e., the probability density function (PDF) of the observations conditioned on the parameter vector $\theta$. For the previously given MIMO ARMAX model, Equation (19) can be developed to yield [4,12,17]
$$C(z^{-1})\,e[k] = y[k] + A_1 y[k-1] + \dots + A_{n_a} y[k-n_a] - B_1 u[k-1] - \dots - B_{n_b} u[k-n_b]$$
This equation can be rewritten using the Kronecker product ($\otimes$) as
$$e[k] = y[k] - \big[\eta_y\ \ \eta_u\ \ \eta_e\big]\,\theta$$
where
$$\eta_y = -\big[I_p\otimes y[k-1]\ \cdots\ I_p\otimes y[k-n_a]\big],\qquad \eta_u = \big[I_p\otimes u[k-1]\ \cdots\ I_p\otimes u[k-n_b]\big],\qquad \eta_e = \big[I_p\otimes e[k-1]\ \cdots\ I_p\otimes e[k-n_c]\big]$$
$$\theta = \mathrm{col}\big(\theta_A, \theta_B, \theta_C\big),\qquad \theta_A = \mathrm{col}\big(\mathrm{col}(A_1), \dots, \mathrm{col}(A_{n_a})\big),\quad \theta_B = \mathrm{col}\big(\mathrm{col}(B_1), \dots, \mathrm{col}(B_{n_b})\big),\quad \theta_C = \mathrm{col}\big(\mathrm{col}(C_1), \dots, \mathrm{col}(C_{n_c})\big)$$
The best estimate of the parameter, $\hat{\theta}$, can be obtained using a numerical minimization algorithm, such as
$$\text{Steepest descent method:}\quad \theta^{(k+1)} = \theta^{(k)} - \lambda\,J^{\top}E;\qquad\qquad \text{Gauss–Newton method:}\quad \theta^{(k+1)} = \theta^{(k)} - \big(J^{\top}J\big)^{-1}J^{\top}E$$
with
$$J = \begin{bmatrix} \partial e[m+1]/\partial\theta^{\top}\\ \vdots\\ \partial e[N]/\partial\theta^{\top} \end{bmatrix},\qquad E = \begin{bmatrix} e[m+1]\\ \vdots\\ e[N] \end{bmatrix}$$
An implementation of the MIMO-ML can be written as Algorithm 2:
Algorithm 2: MIMO Maximum Likelihood (ML) Algorithm
1  Step 1: For k = m+1 to N
2   - Compute the prediction error ê[k] = Â(z^-1) y[k] − B̂(z^-1) u[k] − Ĉ_1 ê[k-1] − ... − Ĉ_nc ê[k-nc]
3   - Compute the partial derivatives of e[k]: ∂e[k]/∂θ^T, whose elements can be computed through MIMO
4   IIR (Infinite Impulse Response) digital filtering using the updated matrix coefficient estimates Ĉ_i of the
5   matrix polynomial Ĉ(z^-1).
6  Step 2: Estimate the parameter vector θ using
7       Steepest descent method: θ_{k+1} = θ_k − λ ∇^T E
8       Gauss–Newton method: θ_{k+1} = θ_k − (∇^T ∇)^-1 ∇^T E
9       with m = na, 0 < λ < 1, and N the number of input/output data samples.
10   Step 3: If no convergence, go to Step 1.
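The Gauss–Newton update of Step 2 can be sketched generically as below, with the Jacobian ∇ approximated by forward differences; this is a hedged illustration on a simple curve-fitting residual, and the function name and arguments are ours, not the paper's:

```python
import numpy as np

def gauss_newton(residuals, theta0, n_iter=20, eps=1e-6):
    """Generic Gauss-Newton iteration theta <- theta - (J^T J)^{-1} J^T E,
    where J (the gradient matrix) is built by forward differences."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(n_iter):
        E = residuals(theta)                    # stacked residual vector
        J = np.empty((E.size, theta.size))
        for j in range(theta.size):
            step = np.zeros_like(theta)
            step[j] = eps
            J[:, j] = (residuals(theta + step) - E) / eps
        theta -= np.linalg.solve(J.T @ J, J.T @ E)   # Gauss-Newton step
    return theta
```

Here it could be exercised on fitting a·exp(b·t) to synthetic data; in the MIMO-ML setting, `residuals` would instead return the stacked prediction errors e[m+1], ..., e[N].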
Example 2.
Let us consider the 2-input 2-output process (i.e.,  p = m = 2 ) described in LMFD by its polynomial matrices as
  A z 1 = 1 0 0 1 + 0.5 0.4 0.3 0.6 z 1 + 0.1 0.3 0.2 0.3 z 2 ;       B z 1 = 0.1 0.9 0.2 0.3 z 1 + 0.8 0.3 0.1 0.7 z 2   C ( z 1 ) = 1 0 0 1 + 0.7 0.2 0.3 0.9 z 1 + 0.3 0.4 0.5 0.7 z 2
The aim is to estimate the matrix polynomials A(z^-1), B(z^-1), and C(z^-1) from I/O data contaminated by white noise. A PRBS input sequence of length N = 1000 is used to excite the system. A simulation experiment was performed with a signal-to-noise ratio of 20 dB for both outputs.
∂e[k]/∂θ_A = [∂e[k]/∂col(A_1), ∂e[k]/∂col(A_2)] = C(z^-1)^-1 [(y^T[k-1] ⊗ I_p), (y^T[k-2] ⊗ I_p)]
∂e[k]/∂θ_B = [∂e[k]/∂col(B_1), ∂e[k]/∂col(B_2)] = −C(z^-1)^-1 [(u^T[k-1] ⊗ I_p), (u^T[k-2] ⊗ I_p)]
∂e[k]/∂θ_C = [∂e[k]/∂col(C_1), ∂e[k]/∂col(C_2)] = −C(z^-1)^-1 [(e^T[k-1] ⊗ I_p), (e^T[k-2] ⊗ I_p)]
Finally, we form ∂e[k]/∂θ = [∂e[k]/∂θ_A, ∂e[k]/∂θ_B, ∂e[k]/∂θ_C], whose elements can be computed through MIMO IIR (Infinite Impulse Response) digital filtering using the updated matrix coefficient estimates Ĉ_1, Ĉ_2 of the matrix polynomial Ĉ(z^-1) = I_2 + Ĉ_1 z^-1 + Ĉ_2 z^-2. Then, using the Gauss–Newton method to update the parameter vector θ gives the results shown below:
  θ ^ A = 0.4932     0.4055       0.2963     0.6029     0.1022     0.2968       0.1975       0.3040 θ ^ B = 0.1056     0.8996       0.1954       0.3014     0.8017     0.2929       0.1018       0.7053     θ ^ C = 0.7225       0.1946       0.3023       0.9205       0.3202       0.4200     0.4972       0.7518                                
Comments. 
The MIMO maximum likelihood algorithm is characterized by the use of the Kronecker product and block-structured filtering through MIMO IIR digital filters, which rely on continuously updated matrix coefficient estimates. The quality of system and noise dynamics estimation can be improved by increasing the number of data samples or by enhancing the signal-to-noise ratio [1,2,3].

5. The Proposed Adaptive Compensator Design

5.1. Conversion Between Left and Right Matrix Fraction Descriptions

A widely adopted approach for describing the input–output dynamics of MIMO systems is to express the transfer function as a ratio of matrix polynomials. This framework, known as the matrix fraction description, provides a concise representation of system behavior in the z-domain. For systems modeled by vector difference equations, the transfer matrices H z 1 and F z 1 can be arranged in rational polynomial form. Because matrix multiplication is generally non-commutative, two separate but equivalent forms are used: the left/right matrix fraction description (LMFD and RMFD), as explained in [26].
y k = H z 1 u k + F z 1 e k ;               z 1   i s   t h e   b a c k w a r d   s h i f t   o p e r a t o r
The transfer function matrix H z 1 can be represented using two equivalent rational forms depending on the arrangement of the polynomial matrices:
R M F D :       H z 1 = C z 1 D z 1 1                 o r   L M F D :       H ( z 1 ) = A ( z 1 ) 1 B ( z 1 )
The matrix polynomials A z 1 , B z 1 , C z 1 and D z 1 involved in these expressions have the following general forms:
A ( z 1 ) =     I p + A 1 z 1 + . . . + A n a z n a , B ( z 1 ) =     B 0 + B 1 z 1 + . . . + B n b z n b . C ( z 1 ) =     C 0 + C 1 z 1 + . . . + C n c z n c , D ( z 1 ) =     I m + D 1 z 1 + . . . + D n d z n d .
with the dimensions A i R p × p ,   B i R p × m ,   C i R p × m and D i R m × m .
Remark 1.
It is possible to obtain either LMFD or RMFD from the other only by solving the following matrix equation:
A z 1 C z 1 = B z 1 D z 1
This last matrix equality can be expanded and, after rearrangement, rewritten in the more compact form S_AB S_CD = S_B, where S_AB is the Sylvester matrix
S_AB =
[ I_p    O_p    ⋯    O_p    O_{p×m}   O_{p×m}   ⋯   O_{p×m} ]
[ A_1    I_p    ⋱    ⋮     −B_1      O_{p×m}   ⋱   ⋮       ]
[ ⋮      A_1    ⋱    O_p    ⋮        −B_1      ⋱   O_{p×m} ]
[ A_na   ⋮     ⋱    I_p   −B_nb     ⋮         ⋱   −B_1    ]
[ O_p    A_na   ⋱    A_1    O_{p×m}  −B_nb     ⋱   ⋮       ]
[ ⋮      ⋱     ⋱    ⋮     ⋮        ⋱         ⋱   ⋮       ]
[ O_p    O_p    ⋯    A_na   O_{p×m}  O_{p×m}   ⋯   −B_nb   ]
S_CD = [ C_1; C_2; ⋮; C_nb; D_1; D_2; ⋮; D_na ];   S_B = [ B_1; B_2; ⋮; B_nb; O_{p×m}; ⋮; O_{p×m} ]
and the solution vector is S_CD = S_AB^+ S_B, where S_AB^+ = (S_AB^T S_AB)^-1 S_AB^T denotes the pseudoinverse of S_AB.
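As an illustration of the Remark, the following sketch solves the coefficient-matching system for a first-order 2 × 2 example (degrees na = nb = 1; all numerical values random and illustrative), then verifies that the two fraction descriptions coincide at a test point:

```python
import numpy as np

rng = np.random.default_rng(2)
p = m = 2
A1 = rng.standard_normal((p, p))   # LMFD: A(z^-1) = I + A1 z^-1
B1 = rng.standard_normal((p, m))   #       B(z^-1) = B1 z^-1

# Unknowns x = [vec(C1); vec(D1)] of the RMFD C(z^-1) = C1 z^-1,
# D(z^-1) = I + D1 z^-1, from A(z^-1) C(z^-1) = B(z^-1) D(z^-1):
#   z^-1 coefficient:  C1            = B1
#   z^-2 coefficient:  A1 C1 - B1 D1 = 0
I_pm = np.eye(p * m)
S = np.block([
    [I_pm,                    np.zeros((p * m, m * m))],
    [np.kron(np.eye(m), A1), -np.kron(np.eye(m), B1)],
])
b = np.concatenate([B1.flatten(order="F"), np.zeros(p * m)])
x = np.linalg.lstsq(S, b, rcond=None)[0]
C1 = x[:p * m].reshape((p, m), order="F")
D1 = x[p * m:].reshape((m, m), order="F")

# Check that the two descriptions agree at an arbitrary test point
z = 1.7 + 0.3j
H_left  = np.linalg.inv(np.eye(p) + A1 / z) @ (B1 / z)      # A^{-1} B
H_right = (C1 / z) @ np.linalg.inv(np.eye(m) + D1 / z)      # C D^{-1}
assert np.allclose(H_left, H_right)
```

For higher degrees, the same coefficient-matching pattern yields the banded block-Toeplitz Sylvester matrix shown above.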

5.2. Non-Adaptive Compensator Design via the Linearly Independent Search Algorithm

Now consider the unity feedback configuration shown in Figure 6. The plant is modeled as a proper rational matrix (in RMFD) of dimension p × m , given by the transfer matrix H ( z 1 ) = C ( z 1 ) D ( z 1 ) 1 . The compensator is designed to be a proper rational matrix (in LMFD) of size m × p , given by G c ( z 1 ) = D c ( z 1 ) 1 N c ( z 1 ) .
Accordingly, the resulting closed-loop transfer matrix is given by
H c l ( z 1 ) = I + H ( z 1 ) G c ( z 1 ) 1 H ( z 1 ) G c ( z 1 )
Using the matrix inverse identity (I + AB)^-1 A = A(I + BA)^-1, we obtain H_cl(z^-1) = H(z^-1)[I + G_c(z^-1)H(z^-1)]^-1 G_c(z^-1), which can be written as
H_cl(z^-1) = C(z^-1) D_F(z^-1)^-1 N_c(z^-1)
where D_F(z^-1) is the closed-loop matrix polynomial defined by the following matrix Diophantine equation:
D F ( z 1 ) = D c ( z 1 ) D ( z 1 ) + N c ( z 1 ) C ( z 1 )
Thus, the design objective can be reformulated as follows: given the matrices D(z^-1) and C(z^-1), along with a specified target matrix polynomial D_F(z^-1), determine the compensator matrices D_c(z^-1) and N_c(z^-1) that fulfill the required relation. It is important to observe that the zeros of D_F(z^-1) define the closed-loop system poles of H_cl(z^-1), while the matrix solvents of D_F(z^-1) represent the associated block poles of H_cl(z^-1). In order to achieve a desired configuration of block poles for the unity feedback scheme presented earlier, one must solve the compensator matrix Equation (27). This process requires addressing a matrix Diophantine equation. Several numerical schemes have been introduced to tackle such problems, including the approaches referenced in [24,25]. The technique adopted in this section is based on the work of Chen [21]. The core idea is to convert the problem into a system of linear algebraic equations by constructing a Sylvester matrix (or, alternatively, a generalized resultant matrix) built from the matrix polynomials D(z^-1) and C(z^-1).
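A small numerical sanity check of the identity H_cl = C D_F^-1 N_c, using a static compensator D_c = I so that the Diophantine equation reduces to simple coefficient matching (all matrices and the target block pole are illustrative choices, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(3)
m = p = 2
D1 = rng.standard_normal((m, m))   # plant D(z^-1) = I + D1 z^-1
C1 = rng.standard_normal((p, m))   # plant C(z^-1) = C1 z^-1 (assume C1 invertible)
F1 = np.diag([0.2, -0.3])          # target D_F(z^-1) = I + F1 z^-1

# Static compensator D_c = I, N_c = Nc0 solving D_c D + N_c C = D_F:
# matching the z^-1 coefficients gives D1 + Nc0 C1 = F1.
Nc0 = (F1 - D1) @ np.linalg.inv(C1)

z = 1.9 - 0.4j                                       # arbitrary test point
H  = (C1 / z) @ np.linalg.inv(np.eye(m) + D1 / z)    # plant C D^{-1}
Gc = Nc0                                             # compensator D_c^{-1} N_c
DF = np.eye(m) + F1 / z                              # target denominator

H_cl_feedback   = np.linalg.inv(np.eye(p) + H @ Gc) @ H @ Gc   # loop formula
H_cl_diophantine = (C1 / z) @ np.linalg.inv(DF) @ Nc0          # C D_F^{-1} N_c
assert np.allclose(H_cl_feedback, H_cl_diophantine)
```

The closed-loop poles are then the zeros of det D_F(z^-1), i.e., the eigenvalues placed through F1.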
The Linearly Independent Search Algorithm: Given a sequence of n-dimensional row vectors T_1, T_2, ..., T_p, the proposed algorithm (Algorithm 3) constructs an n × n projection matrix P(k) iteratively for each k = 1, 2, ..., p.
Algorithm 3: Linearly Independent Search Algorithm
1  A. Initialize P(0) = I_n (the n × n identity matrix).
2  B. For k = 1, 2, ..., p do:
3  If T_k P(k−1) T_k^T ≠ 0, then P(k) = P(k−1) − P(k−1) T_k^T T_k P(k−1) / (T_k P(k−1) T_k^T),
4  and T_k is linearly independent of the previous rows; P(k) ξ = ξ for every ξ ⊥ {T_1, T_2, ..., T_k}.
5  Else P(k) = P(k−1) and T_k is linearly dependent; P(k) T_i^T = 0 for every T_i ∈ {T_1, ..., T_k}.
Proof. 
Given a real matrix X ∈ R^{m×n} with column set S = {x_1, x_2, ..., x_n}, applying the Gram–Schmidt algorithm to the columns of X yields an orthogonal basis {u_1, u_2, ..., u_n} for the range space R(X), where
u_k = x_k − Σ_{i=1}^{k−1} [(u_i^T x_k)/(u_i^T u_i)] u_i;   P(k−1) x_k = u_k, where P(0) = I and k = 1, 2, ..., n
We will use the fact that
[(u_i^T x_k)/(u_i^T u_i)] u_i = u_i (u_i^T x_k)/(u_i^T u_i) = [u_i u_i^T/(u_i^T u_i)] x_k
Let us expand the u_k formula and obtain the corresponding projection matrix P(k) in terms of x_k and the previous P(k−1). We can prove the linearly independent search algorithm iteratively as follows:
Step 0:  P(0) x_1 = u_1 = x_1  ⟹  P(0) = I
Step 1:  P(1) x_2 = u_2 = x_2 − (u_1^T x_2)/(u_1^T u_1) u_1 = [I − u_1 u_1^T/(u_1^T u_1)] x_2  ⟹  P(1) = P(0) − (P(0) x_1)(P(0) x_1)^T / ((P(0) x_1)^T (P(0) x_1))
Step 2:  P(2) x_3 = u_3 = x_3 − (u_1^T x_3)/(u_1^T u_1) u_1 − (u_2^T x_3)/(u_2^T u_2) u_2 = [I − u_1 u_1^T/(u_1^T u_1) − u_2 u_2^T/(u_2^T u_2)] x_3  ⟹  P(2) = P(1) − (P(1) x_2)(P(1) x_2)^T / ((P(1) x_2)^T (P(1) x_2))
Step k:  P(k) x_{k+1} = u_{k+1} = x_{k+1} − Σ_{i=1}^{k} (u_i^T x_{k+1})/(u_i^T u_i) u_i  ⟹  P(k) = P(k−1) − (P(k−1) x_k)(P(k−1) x_k)^T / ((P(k−1) x_k)^T (P(k−1) x_k))
Notice that when x_{k+1} is linearly dependent on the previous columns, say x_{k+1} = β x_k, then u_{k+1} = β u_k, which can be written as P(k) x_{k+1} = β P(k−1) x_k ⟹ P(k) = P(k−1), and this completes the proof. □
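The search algorithm and its projector update can be condensed into a few lines (a minimal numpy sketch; the function name and tolerance are ours):

```python
import numpy as np

def independent_rows(T, tol=1e-10):
    """Linearly independent search: iterate the projector P(k) and flag
    each row T_k as independent of or dependent on its predecessors."""
    n = T.shape[1]
    P = np.eye(n)
    flags = []
    for t in T:                      # t is one 1-D row vector T_k
        Pt = P @ t
        gamma = float(t @ Pt)        # T_k P(k-1) T_k^T  (P is symmetric)
        if gamma > tol:
            P = P - np.outer(Pt, Pt) / gamma   # deflate along T_k
            flags.append(True)       # independent row
        else:
            flags.append(False)      # dependent row; P unchanged
    return flags, P
```

For the rows e_1, e_2, e_1 + e_2, 2e_3, the third row is flagged dependent and the final projector is the zero matrix, since the independent rows already span R^3.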

5.3. Adaptation Mechanism Development

Classical controllers cannot effectively handle uncertainties in dynamic systems, as parameter variations alter operating conditions, leading to instability and degraded performance [47,48]. Adaptive control naturally addresses these issues by adjusting parameters to ensure closed-loop stability [15,23]. Two main types exist: model reference adaptive systems (MRAS) and self-tuning regulators (STRs). This work focuses on the self-tuning strategy, which relies on parametric estimation, as illustrated in Figure 7.
The novelty lies in the tight coupling between the Sylvester-based MFD transformation and the adaptive Diophantine solver, allowing exact block pole relocation under bounded parametric uncertainty (±7% in simulations) while retaining strict decoupling (<2% cross-channel influence). The proposed adaptive block pole placement scheme extends the matrix linear Diophantine equation framework into a fully online, eigenstructure-shaping control law for large-scale MIMO systems, as given in Algorithm 4.
Algorithm 4: The Adaptive Block-Pole Placement Algorithm
1  Step 1: ▪ Enter the values of M, na, nb, P = c·I
2          ▪ Enter the nominal values of D_i ∈ R^{m×m} and C_i ∈ R^{p×m}
3          ▪ Initialize with the values of D_i (m×m) and C_i (p×m)
4          For k = na : M do
5  Step 2: ▪ Enter the desired block poles R_i^d (m×m) to be placed and construct the
6          desired D_F(z^-1). Then compose: D̂_F(z^-1) = D̂_c(z^-1) D̂(z^-1) + N̂_c(z^-1) Ĉ(z^-1).
7          ▪ Solve the Diophantine equation using the recursive search algorithm
8          ▪ Obtain D̂_c(z^-1) and N̂_c(z^-1)
9  Step 3: ▪ Give the desired trajectory sequence r[k].
10         ▪ Compute the closed-loop output and the control law by:
11         ▪ y[k] = Ĉ(z^-1) [D̂_c(z^-1) D̂(z^-1) + N̂_c(z^-1) Ĉ(z^-1)]^-1 N̂_c(z^-1) r[k]
12         ▪ u[k] = D̂_c(z^-1)^-1 N̂_c(z^-1) u_c[k], with u_c[k] = r[k] − y[k]
13 Step 4: Identify the plant parameters using the MIMO-RLS or MIMO-ML algorithm
14 Step 5: Update the matrix coefficients θ_AB = θ̂_AB. Convert the LMFD to RMFD using the
15         Sylvester matrix equation: θ̂_CD = funct(A(z^-1), B(z^-1)) = funct(θ_AB)
16         Get Ĉ_i (p×m) and D̂_i (m×m), and go to Step 2
The proposed method guarantees the following criteria as summarized in Table 1:
Sensitivity and Robustness Analysis: 
This adaptive mechanism merges high-precision block pole placement with online parametric learning, producing a compensator synthesis procedure that is both structurally minimal and robust to noise and disturbances, as validated in the winding process application. For the closed-loop system with transfer matrix H_cls(z^-1) = C(z^-1)[D_c(z^-1)D(z^-1) + N_c(z^-1)C(z^-1)]^-1 N_c(z^-1), the sensitivity and complementary sensitivity functions are given, respectively, by S(z^-1) = [I + H(z^-1)G_c(z^-1)]^-1 and T(z^-1) = I − S(z^-1) = H(z^-1)G_c(z^-1)[I + H(z^-1)G_c(z^-1)]^-1. The robustness condition with respect to multiplicative plant perturbations H_Δ(z^-1) is ‖T(z^-1)H_Δ(z^-1)‖_∞ < 1, which, in the polynomial domain, imposes a generalized eigenvalue bound
ρ(T(e^{jω}) H_Δ(e^{jω})) < 1,   ∀ ω ∈ [0, π],
where ρ(·) denotes the spectral radius. For additive plant perturbations, the small-gain robust-stability test involves M(z^-1) = G_c(z^-1)S(z^-1) instead of T. Robust stability is guaranteed if ‖G_c(z^-1)S(z^-1)H_Δ(z^-1)‖_∞ < 1; equivalently, on the unit circle z = e^{jω} with ω ∈ [0, π],
μ(H_cls, H_Δ) = sup_{ω ∈ [0, π]} ‖G_c(e^{jω}) S(e^{jω}) H_Δ(e^{jω})‖ < 1
The block pole placement directly shapes the denominator D F e j ω , allowing targeted reduction in the H -norm of S e j ω in selected frequency ranges, thereby enhancing disturbance rejection while preserving robustness margins. Sensitivity minimization can be formalized as
min_{D_c, N_c} ‖W_1(e^{jω}) S(e^{jω})‖_∞
subject to the Diophantine constraint D_F = D_c D + N_c C, where W_1(e^{jω}) is a frequency-weighting matrix enforcing the robustness–performance trade-off.
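On a dense frequency grid, the sensitivity peak ‖S‖_∞ entering these conditions can be estimated as follows (a sketch; the plant and compensator are passed as generic callables, not the winding-process model):

```python
import numpy as np

def sensitivity_peak(H, Gc, n_grid=400):
    """Peak of the largest singular value of S(e^{jw}) = (I + H Gc)^{-1}
    over w in [0, pi]. H and Gc map a complex point z to a transfer matrix."""
    peak = 0.0
    for w in np.linspace(0.0, np.pi, n_grid):
        z = np.exp(1j * w)
        Hz = H(z)
        S = np.linalg.inv(np.eye(Hz.shape[0]) + Hz @ Gc(z))
        peak = max(peak, np.linalg.svd(S, compute_uv=False)[0])
    return peak
```

For instance, with the toy loop H(z) = (0.5/z) I_2 and G_c = I_2, the sensitivity magnitude is 1/|e^{jω} + 0.5|, whose peak over [0, π] is 2.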

6. Application to Winding Process

Winding systems are prevalent in many industrial sectors, including steel rolling mills and web-processing lines for coating, paper production, and polymer film extrusion. Their main purpose is to regulate web movement to reduce friction, slippage, and deformation, which can affect product quality [14]. The considered winding process (Figure 8) is modeled by ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), where x(t) = [x_1 ... x_6]^T is the state vector, u(t) = [u_1 u_2 u_3]^T is the input vector, and y(t) = [y_1 y_2 y_3]^T is the output vector. The constant matrices A ∈ R^{6×6}, B ∈ R^{6×3}, and C ∈ R^{3×6} are the state-space matrices. The physical variables of the input/output vectors are as follows: u_1 is the setpoint of motor current 1, u_2 is the setpoint of motor angular speed 2, u_3 is the setpoint of motor current 3, y_1 is the web tension (T_1) between motors 1 and 2, y_2 is the web tension (T_3) between motors 2 and 3, and y_3 is the angular speed of motor 2 (Ω_2).
The state-space model is derived directly from the linearized model of the winding process, as reported in earlier studies [3,14] and adopted in this work for validation.
  A = 5.7174       0.6513       4.6614 2.2842       3.2578       4.2846                 4.88430 3.80000 12.5805 0.47720 10.3059 11.2052                 6.1395       0.8200 1.3596         3.2639 1.4336 1.4066                 9.0408       1.1555 3.0943       5.5751 1.7843 2.1079           1.7346 0.5212 0.5536 0.7958       0.2381 0.2192           9.98310       1.80720       10.7326 3.8630       8.38510       9.18760                         B = 0.043798 0.105739 0.358248 0.296437 0.932283 0.465714 0.956172 0.235368 0.154883 0.967756 0.525201 0.257882 0.312443 0.199597 0.663932 0.667207 0.266742 0.515036 ;                     C = 0.027168       0.075248 0.010228       0.019091       0.026645 0.062940         0.610518 0.714339 0.275497 0.601226       0.025436       0.792073 0.164078       0.300634       0.049733       0.163163       0.066043 0.325489        
Its corresponding RMFD model is given by y k = C z 1 D z 1 1 u k , where   D z 1 = I 3 + D 1 z 1 + D 2 z 2 ; C z 1 = C 1 z 1 + C 2 z 2 and I 3 is the 3-by-3 identity matrix and the other matrix coefficients are
D 1 = 2.2783 0.06775       0.55208       1.8518 1.66390 2.95250 1.1294 0.17974 0.18159 ,                     D 2 =       1.2801 0.067705 0.55350 1.8033 0.672180       2.87540       1.0791 0.171930 0.73911                               C 1 = 0.00429090 0.0063885 0.0128310 0.06687700 0.0087380 0.1211500 0.00077228 0.0302260 0.00060105 ,         C 2 =       0.010399 0.0027112 0.015017 0.144110 0.0208320       0.242840       0.048767 0.0216820 0.079244      
Remark. 
To simplify the control procedure, we chose a first-order fixed-structure compensator u[k] = [D̂_c0 + D̂_c1 z^-1]^-1 [N̂_c0 + N̂_c1 z^-1] (r[k] − y[k]) with a constant-gain pre-compensator F = lim_{z→1} [Ĉ(z^-1) D_F(z^-1)^-1 N̂_c(z^-1)]^-1; then, the desired D_F(z^-1) is a matrix polynomial of order three. Let us now choose the three block roots to be placed:
    R 1 = 0.0000 0.0082 0.0033 0.0306 0.0533 0.0228 0.0028 0.0041 0.0045 ;             R 2 = 0.0607 0.0201 0.0278 0.1411 0.0482 0.0718 0.0875 0.0267 0.0432 ;   R 3 = 0.0499 0.0223 0.0250 0.0156 0.0023 0.0133 0.0572 0.0285 0.0302   σ 1 R 1 = 0.046 ; 0.0053 ; 0.0065 ;         σ 2 R 2 = 0.033 ; 0.0074 ; 0.0051 ;           σ 3 R 3 = 0.0054 ; 0.010 ; 0.022          
where σ_1(R_1), σ_2(R_2), and σ_3(R_3) are the spectra of those block roots. Hence, to reconstruct the desired matrix polynomial, we direct the reader to [6,7]. Solving the Diophantine matrix equation D_F(z^-1) = I_3 + D_F1 z^-1 + D_F2 z^-2 + D_F3 z^-3 yields the linear system of equations:
  D ^ c 0 D ^ c 1 N ^ c 0 N ^ c 1 = I 3 D 1 D 2 3 3 I 3 T D 1 D 2 3 C 1 C 2 3 3 3 C 1 C 2 I 3 3 3 3 D 1 I 3 T C 1 3 D 2 D 1 C 2 C 1 3 D 2 3 C 2 1 I 3 D 1 D 2 3 3 I 3 T D 1 D 2 3 C 1 C 2 3 3 3 C 1 C 2 I 3 D F 1 D F 2 D F 3
The nominal compensator coefficients are obtained from this equation. Assuming system uncertainties of 7% of the nominal values, the transfer function becomes H(z^-1) = H_0(z^-1) + ΔH(z^-1). Now, starting the adaptive block pole placement algorithm, we obtain the results shown in Figure 9.
The simulation results (Figure 9) clearly indicate that the proposed algorithm successfully performs block pole placement, ensuring system stability even in the presence of abrupt parameter uncertainties, while maintaining low tracking errors. Specifically, Figure 9a shows that the system achieved block root assignment accuracy better than ±0.001, a mean steady-state error of less than 0.02, and maximum transient response time of 0.8 s under load variation. The effect of parameter variations had minimal impact on the performance of the digital compensator, which highlights the effectiveness of the adaptation mechanism. Moreover, Figure 9b shows that when the reference value of one control variable changes, the resulting interactions within the closed-loop system remain negligible, with cross-variable influence reduced to less than 2%. This implies that the system operates in a fully decoupled manner. Such decoupling is achieved because the MFD-based control structure generates coordinated actions across all control inputs as soon as any reference input changes. To evaluate the system’s capability for disturbance rejection, input pulses with amplitudes equal to one-tenth of the maximum input level were introduced, as illustrated in Figure 10. Additionally, white noise was injected at the system output to simulate measurement disturbances. Despite these perturbations, the system maintained robust regulation under 10% amplitude noise and step disturbances, effectively suppressing both transient and noise effects (see Figure 10a). Although some abrupt deviations appear in the output (see Figure 10b) due to the disturbances, the response quickly returns to its nominal state, demonstrating fast and reliable adaptive regulation.
To assess the robustness of the closed-loop transfer function, we compute the following:
1. The smallest and the largest singular values
σ_m(H_cls) = inf_{ω ∈ [0, π]} √(λ_min(Γ_{H_cls}(e^{jω})));   σ_M(H_cls) = sup_{ω ∈ [0, π]} √(λ_max(Γ_{H_cls}(e^{jω})))
where Γ_{H_cls}(e^{jω}) is the Gram matrix Γ_{H_cls}(e^{jω}) = H_cls^H(e^{jω}) H_cls(e^{jω});
2. The condition number of the closed-loop transfer function
χ(H_cls(e^{jω})) = σ_M(H_cls) / σ_m(H_cls);
3. The infinity norm of the sensitivity function
‖S(e^{jω})‖_∞ = sup_{ω ∈ [0, π]} √(λ_max(S^H(e^{jω}) S(e^{jω})))
where S(e^{jω}) = [I + H(e^{jω}) G_c(e^{jω})]^-1.
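These indicators can be evaluated by a singular-value sweep over the upper unit circle (a sketch with a generic closed-loop callable; the grid density is a tuning choice, not a value from the paper):

```python
import numpy as np

def robustness_metrics(Hcl, n_grid=400):
    """Extremal singular values and condition number of a closed-loop
    transfer matrix Hcl(z), swept over z = e^{jw}, w in [0, pi]."""
    s_min, s_max = np.inf, 0.0
    for w in np.linspace(0.0, np.pi, n_grid):
        sv = np.linalg.svd(Hcl(np.exp(1j * w)), compute_uv=False)
        s_max = max(s_max, sv[0])     # sigma_M: largest singular value
        s_min = min(s_min, sv[-1])    # sigma_m: smallest singular value
    return s_min, s_max, s_max / s_min
```

A condition number χ close to 1 indicates a well-conditioned closed loop, which is the property compared across designs in Table 2.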
Table 2 presents the robustness comparison using the extremal singular values, condition number, and H norms of the closed-loop and sensitivity functions. The adaptive block pole/Diophantine design achieves the best conditioning and smallest sensitivity peaks, consistent with the simulation results on the 3 × 3 winding process.
The adaptive block pole/Diophantine design achieves the largest minimum singular value and the smallest χ H c l s , ensuring a well-conditioned closed loop with reduced modeling sensitivity. Its H c l s remains close to unity, reflecting tight transient control, while the lowest S (≈1.15) confirms robust stability against multiplicative uncertainty and effective disturbance rejection, consistent with fast regulation, low steady-state error, and minimal cross-channel interaction. In contrast, classical schemes show larger χ H c l s and sensitivity peaks, underscoring their vulnerability to noise and bias in MIMO settings. The method of [47] improves robustness compared to traditional schemes but still falls short of the adaptive block pole/Diophantine approach, whose explicit denominator shaping and online parameter adaptation provide clear performance advantages.

7. Conclusions

This study proposed and validated a new adaptive digital control technique based on matrix Diophantine equations for block pole placement in large-scale MIMO systems. The controller integrates recursive MIMO RLS identification with an adaptive compensator to achieve precise eigenstructure assignment even under significant parametric uncertainty (up to 7%). The numerical simulations on a three-input/three-output industrial winding process demonstrated excellent tracking performance, low interaction, and robust rejection of disturbances and sensor noise.
Specifically, the system achieved the following:
  • Block root assignment accuracy better than ±0.001;
  • Mean steady-state error of less than 0.02;
  • Maximum transient response time of 0.8 s under load variation;
  • Robust regulation under 10% amplitude noise and step disturbances;
  • Complete decoupling, with cross-variable influence reduced to <2%.
While the validation has been limited to simulations, the selected case study and disturbance scenarios were designed to closely reflect industrial practice. The promising results suggest that the proposed strategy holds potential for real-world deployment in multivariable systems requiring fast and reliable online adaptation. Future work will focus on experimental implementation and comparative testing on industrial platforms to further confirm the practical viability of the approach.

Author Contributions

B.B.: conceptualization, methodology, software, formal analysis, investigation, data curation, original draft writing, visualization, and project administration. K.H. and A.K.: supervision, validation, methodology, and review and editing of the manuscript. J.A.Y.: contribution of resources, technical input, and manuscript review and editing. A.-N.S.: visualization, formal analysis, methodology, data curation, overall supervision, funding acquisition, project coordination, and critical manuscript revision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ARMAX   Autoregressive Moving Average with Exogenous Excitation
GCD     Greatest Common Divisor
GCLD    Greatest Common Left Divisor
GCRD    Greatest Common Right Divisor
IIR     Infinite Impulse Response
LMFD    Left Matrix Fraction Descriptions
LS      Least Squares
LTI     Linear Time-Invariant
MFD     Matrix Fraction Descriptions
MIMO    Multi-Input Multi-Output
ML      Maximum Likelihood
MRAS    Model Reference Adaptive Systems
PEM     Prediction Error Method
PRBS    Pseudo Random Binary Sequence
RLS     Recursive Least Squares
RMFD    Right Matrix Fraction Descriptions
STR     Self-Tuning Regulators

References

  1. Mohamed, A. An Optimal Instrumental Variable Identification Approach for Left Matrix Fraction Description Models. Stud. Inform. Control. 2008, 17, 361–372. [Google Scholar]
  2. Akroum, M.; Hariche, K. Extending the SRIV Identification Algorithm to MIMO LMFD Models. J. Electr. Eng. Technol. 2009, 4, 135–142. [Google Scholar] [CrossRef]
  3. Mu, B.-Q.; Chen, H.-F.; Wang, L.Y.; Yin, G. Characterization and Identification of Matrix Fraction Descriptions for LTI Systems. SIAM J. Control. Optim. 2014, 52, 3694–3721. [Google Scholar] [CrossRef]
  4. Ljung, L. System Identification: Theory for the User; Prentice Hall: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
  5. Malika, Y.; Clark, T. A Contribution to the Polynomial Eigen Problem. Int. J. Math. Comput. Nat. Phys. Eng. 2014, 8, 1131–1338. [Google Scholar] [CrossRef]
  6. Yaici, M.; Hariche, K. On eigenstructure assignment using block poles placement. Eur. J. Control. 2014, 20, 217–226. [Google Scholar] [CrossRef]
  7. Cohen, N. Spectral analysis of regular matrix polynomials. Integral Equations Oper. Theory 1983, 6, 161–183. [Google Scholar] [CrossRef]
  8. DiStefano, J.J.; Stubberud, A.R. Theory and Problems of Feedback and Control Systems; Mc. Graw Hill: New York, NY, USA, 1967. [Google Scholar]
  9. Kamel, H. Interpolation Theory in the Structural Analysis of λ-matrices. Ph.D. Thesis, Cullen College of Engineering, University of Houston, Houston, TX, USA, 1987. [Google Scholar]
  10. Singla, S.; Tronsgard, A. Interpolation Polynomials and Linear Algebra. C. R. Math. Rep. Acad. Sci. Canada 2022, 44, 33–49. [Google Scholar]
  11. Hariche, K.; Denman, E.D. On Solvents and Lagrange Interpolating-Matrices. Appl. Math. Comput. 1988, 25, 321–332. [Google Scholar] [CrossRef]
  12. Zhu, Y.; Backx, T. Identification of Multivariable Industrial Processes; Springer-Verlag: London, UK, 1993. [Google Scholar]
  13. Al-Muthairi, N.; Bingulac, S.; Zribi, M. Identification of discrete-time MIMO systems using a class of observable canonical-form. IEE Proc. Control Theory Appl. 2002, 149, 125–130. [Google Scholar] [CrossRef]
  14. Bastogne, T.; Noura, H.; Sibille, P.; Richard, A. Multivariable identification of a winding process by subspace methods for tension control. Control Eng. Pract. 1998, 6, 1077–1088. [Google Scholar] [CrossRef]
  15. Landau, I.D.; Lozano, R.; Mohammed, M.; Karimi, A. Adaptive Control: Algorithms, Analysis and Applications; Springer-Verlag: London, UK, 2011. [Google Scholar]
  16. Pereira, E. On solvents of matrix polynomials. Appl. Numer. Math. 2003, 47, 197–208. [Google Scholar] [CrossRef]
  17. Ljung, L. Theory and Practice of Recursive Identification; MIT Press: Cambridge, MA, USA; London, UK, 1987. [Google Scholar]
  18. Ahn, S. Stability of a matrix polynomial in discrete systems. IEEE Trans. Autom. Control. 1982, 27, 1122–1124. [Google Scholar] [CrossRef]
  19. Moore, B. On the flexibility offered by state feedback in multivariable systems beyond closed loop eigenvalue assignment. IEEE Trans. Autom. Control. 1976, 21, 689–692. [Google Scholar] [CrossRef]
  20. Wonham, W. On pole assignment in multi-input controllable linear systems. IEEE Trans. Autom. Control. 1967, 12, 660–665. [Google Scholar] [CrossRef]
  21. Chen, C.T. Linear System Theory and Design; Holt, Reinhart and Winston: New York, NY, USA, 1984. [Google Scholar]
  22. Kucera, V. Discrete Linear Control: The Polynomial Equation Approach; John Wiley: Hoboken, NJ, USA, 1979. [Google Scholar]
  23. Ioannou Petros, A. Robust Adaptive Control: Design, Analysis and Robustness Bounds; PTR Prentice-Hall: Upper Saddle River, NJ, USA, 1996. [Google Scholar]
  24. Fang, C.-H. A simple approach to solving the Diophantine equation. IEEE Trans. Autom. Control. 1992, 37, 152–155. [Google Scholar] [CrossRef]
  25. Fan, C.H.; Chang, F.R. A novel approach for solving Diophantine equations. IEEE Trans. Circuits Syst. 1990, 37, 1455–1457. [Google Scholar] [CrossRef]
  26. Bekhiti, B. The Left and Right Block Pole Placement Comparison Study: Application to Flight Dynamics. Inform. Eng. Int. J. (IEIJ) 2016, 4, 41–62. [Google Scholar] [CrossRef]
  27. Zaitsev, V. On arbitrary matrix coefficient assignment for the characteristic matrix polynomial of block matrix linear control systems. Vestnik Udmurt. Univ. Mat. Mekhanika. Komp’yuternye Nauk. 2024, 34, 339–358. [Google Scholar] [CrossRef]
  28. Bekhiti, B.; Dahimene, A.; Nail, B.; Hariche, K. On λ-matrices and their applications in MIMO control systems design. Int. J. Model. Identif. Control. 2018, 29, 281–294. [Google Scholar] [CrossRef]
  29. Yu, P.; Zhang, G. Eigenstructure assignment for polynomial matrix systems ensuring normalization and impulse elimination. Math. Found. Comput. 2019, 2, 251–266. [Google Scholar] [CrossRef]
  30. Belkacem, B. On the theory of λ-matrices based MIMO control system design. Control. Cybern. 2015, 44, 421–443. [Google Scholar]
  31. Bekhiti, B.; Hariche, K. On Block Roots of Matrix Polynomials Based MIMO Control System Design. In Proceedings of the 4th IEEE International Conference on Electrical Engineering (ICEE), Boumerdes, Algeria, 13–15 December 2015. [Google Scholar] [CrossRef]
  32. Nehorai, A. Recursive identification algorithms for right matrix fraction description models. IEEE Trans. Autom. Control 1984, 29. [Google Scholar] [CrossRef]
  33. Bekhiti, B.; Iqbal, J.; Hariche, K.; Fragulis, G.F. Neural Adaptive Nonlinear MIMO Control for Bipedal Walking Robot Locomotion in Hazardous and Complex Task Applications. Robotics 2025, 14, 84. [Google Scholar] [CrossRef]
  34. Bekhiti, B.; Nail, B.; Tibermacine, I.E.; Salim, R. On Hyper-Stability Theory Based Multivariable Nonlinear Adaptive Control: Experimental Validation on Induction Motors. IET Electr. Power Appl. 2025, 19, e70035. [Google Scholar] [CrossRef]
  35. Bekhiti, B. A Novel Three-Dimensional Sliding Pursuit Guidance and Control of Surface-to-Air Missiles. Technologies 2025, 13, 171. [Google Scholar] [CrossRef]
  36. Sugimoto, K.; Imahayashi, W. Left-right Polynomial Matrix Factorization for MIMO Pole/Zero Cancellation with Application to FEL. Trans. Inst. Syst. Control. Inf. Eng. 2019, 32, 32–38. [Google Scholar] [CrossRef]
  37. Tan, L.; Guo, X.; Deng, M.; Chen, J. On the adaptive deterministic block Kaczmarz method with momentum for solving large-scale consistent linear systems. J. Comput. Appl. Math. 2024, 457, 116328. [Google Scholar] [CrossRef]
  38. Chaouech, L.; Soltani, M.; Telmoudi, A.J.; Chaari, A. Design of a robust optimal sliding mode controller with pole placement and disturbance rejection based on scalar sign. Int. J. Dyn. Control. 2025, 13, 236. [Google Scholar] [CrossRef]
  39. Brizuela-Mendoza, J.A.; Mixteco-Sánchez, J.C.; López-Osorio, M.A.; Ortiz-Torres, G.; Sorcia-Vázquez, F.D.J.; Lozoya-Ponce, R.E.; Ramos-Martínez, M.B.; Pérez-Vidal, A.F.; Morales, J.Y.R.; Guzmán-Valdivia, C.H.; et al. On the State-Feedback Controller Design for Polynomial Linear Parameter-Varying Systems with Pole Placement within Linear Matrix Inequality Regions. Mathematics 2023, 11, 4696. [Google Scholar] [CrossRef]
  40. Tymerski, R. Optimizing Pole Placement Strategies for a Higher-Order DC-DC Buck Converter: A Comprehensive Evaluation. J. Power Energy Eng. 2025, 13, 47–69. [Google Scholar] [CrossRef]
  41. Nema, S. Pole-Placement and Different PID Controller Structures Comparative Analysis for a DC Motor Optimal Performance. In Proceedings of the 2024 21st Learning and Technology Conference, Jeddah, Saudi Arabia, 15 January 2024. [Google Scholar] [CrossRef]
  42. Gohberg, I.; Lancaster, P.; Rodman, L. Matrix Polynomials; Classics in Applied Mathematics; Society for Industrial and Applied Mathematics: Lancaster, PA, USA, 2009; Volume 58. [Google Scholar]
  43. Bai, Z.Z.; Pan, J.Y. Matrix Analysis and Computations; SIAM: Philadelphia, PA, USA, 2021. [Google Scholar]
  44. Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  45. Bekhiti, B.; Fragulis, G.F.; Maraslidis, G.S.; Hariche, K.; Cherifi, K. A Novel Recursive Algorithm for Inverting Matrix Polynomials via a Generalized Leverrier–Faddeev Scheme: Application to FEM Modeling of Wing Vibrations in a 4th-Generation Fighter Aircraft. Mathematics 2025, 13, 2101. [Google Scholar] [CrossRef]
  46. Tian, Y.; Xia, C. On the Low-Degree Solution of the Sylvester Matrix Polynomial Equation. J. Math. 2021, 2021, 1–4. [Google Scholar] [CrossRef]
  47. Zaitsev, V. Arbitrary Coefficient Assignment by Static Output Feedback for Linear Differential Equations with Non-Commensurate Lumped and Distributed Delays. Mathematics 2021, 9, 2158. [Google Scholar] [CrossRef]
  48. Sugimoto, K.; Han, X.; Imahayashi, W. Stability of MIMO Feedback Error Learning Control under a Strictly Positive Real Condition. IFAC PapersOnLine 2018, 51, 168–174. [Google Scholar] [CrossRef]
Figure 1. Various compensator structures for MFD of MIMO systems.
Figure 2. Unity feedback configurations for MIMO systems described by RMFD and LMFD.
Figure 3. Output feedback configurations for MIMO systems described by RMFD-LMFDs.
Figure 4. Input–output (two-DOF) feedback configuration for MIMO systems.
Figure 5. Closed-loop denominator shaping via pre-compensator placement.
Figure 6. Right matrix fraction descriptions (RMFD) compensator structure.
Figure 7. Indirect adaptive control structure (denominator shaping via block poles).
Figure 8. Winding process showing sequential layering and alignment in prototype assembly.
Figure 9. Adaptive trajectory tracking control. Sub-figure (a) denotes the responses of the tracking problem. Sub-figure (b) represents the dynamic error signals.
Figure 10. Digital adaptive block pole assignment with perturbation rejection. Sub-figure (a) denotes the trajectory tracking. Sub-figure (b) represents the dynamic error signals.
Table 1. Performance guarantees of the proposed adaptive block pole placement method.
✓ Eigenvalue convergence: $\max_i \left| \lambda_i\!\left(D_F(z)\right) - \lambda_i^{\mathrm{closed}}(z) \right| \leq 10^{-3}$.
✓ Bounded tracking error: $\limsup_{k \to \infty} \| e(k) \| \leq 0.02$.
✓ Stability preservation: all closed-loop poles remain $|\lambda_i| < 1$ during adaptation.
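The stability-preservation check in Table 1 amounts to verifying that all latent roots of the (monic) closed-loop denominator matrix polynomial lie inside the unit circle. A minimal sketch of that check, via the block companion matrix of the denominator, is given below; the coefficient matrices `D0` and `D1` are hypothetical placeholders, not the paper's identified winding-process model.

```python
import numpy as np

def block_companion(D):
    """Block companion matrix of a monic matrix polynomial
    D(z) = I z^l + D_{l-1} z^{l-1} + ... + D_1 z + D_0,
    given the non-leading coefficients D = [D_0, ..., D_{l-1}]."""
    m = D[0].shape[0]
    l = len(D)
    C = np.zeros((m * l, m * l))
    if l > 1:
        # upper shift structure: x_{i+1} = lambda * x_i
        C[: m * (l - 1), m:] = np.eye(m * (l - 1))
    for i, Di in enumerate(D):
        # bottom block row carries the negated coefficients
        C[m * (l - 1):, i * m:(i + 1) * m] = -Di
    return C

# Hypothetical 2x2, degree-2 closed-loop denominator:
# D(z) = I z^2 + D1 z + D0, decoupled for easy hand-checking.
D0 = np.diag([0.20, 0.12])
D1 = np.diag([-0.9, -0.7])

poles = np.linalg.eigvals(block_companion([D0, D1]))
stable = bool(np.all(np.abs(poles) < 1.0))  # Table 1 criterion |lambda_i| < 1
```

For this diagonal example the latent roots are those of $z^2 - 0.9z + 0.2$ and $z^2 - 0.7z + 0.12$, i.e. $\{0.4, 0.5\}$ and $\{0.3, 0.4\}$, so the criterion holds.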
Table 2. Comparative robustness metrics (discrete time, evaluated on $e^{j\omega}$, $\omega \in [0, \pi]$).
| Metric | Method of [30] | Method of [39] | Method of [47] | Proposed Method |
|---|---|---|---|---|
| $\sigma_m(H_{cl})$ (min. singular value) | 0.28 | 0.40 | 0.52 | 0.62 |
| $\sigma_M(H_{cl})$ (max. singular value) | 2.90 | 2.10 | 1.80 | 1.35 |
| $\chi\!\left(H_{cl}(e^{j\omega})\right)$ (condition number) | 10.4 | 5.3 | 3.5 | 2.2 |
| $S(e^{j\omega})$ (sensitivity peak) | 3.10 | 2.20 | 1.90 | 1.40 |
| $\mu(H_{cl}, H_\Delta)$ | 0.95 | 0.84 | 0.73 | 0.41 |
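The singular-value metrics in Table 2 are worst-case quantities over a frequency grid on the unit circle. As an illustration only, the sketch below evaluates a hypothetical 2x2 closed-loop transfer matrix in RMFD form, $H(z) = N(z)D(z)^{-1}$, on $z = e^{j\omega}$, $\omega \in [0, \pi]$, and collects the minimum singular value, maximum singular value, and worst-case condition number; the numerator and denominator entries are made up for the example, not taken from the paper's model.

```python
import numpy as np

def frequency_metrics(H, omegas):
    """Worst-case singular-value metrics of H(e^{jw}) over a grid of
    frequencies: returns (min sigma, max sigma, max condition number)."""
    smin, smax, cond = np.inf, 0.0, 0.0
    for w in omegas:
        Hw = H(np.exp(1j * w))
        s = np.linalg.svd(Hw, compute_uv=False)  # sorted descending
        smin = min(smin, s[-1])
        smax = max(smax, s[0])
        cond = max(cond, s[0] / s[-1])
    return smin, smax, cond

def H(z):
    # Hypothetical RMFD closed loop H(z) = N(z) D(z)^{-1};
    # D(z) has all latent roots inside the unit circle.
    N = np.array([[z - 0.1, 0.05], [0.02, z - 0.2]])
    D = np.array([[z**2 - 0.9 * z + 0.20, 0.0],
                  [0.0, z**2 - 0.7 * z + 0.12]])
    return N @ np.linalg.inv(D)

omegas = np.linspace(0.0, np.pi, 256)
smin, smax, kappa = frequency_metrics(H, omegas)
```

The same loop with the sensitivity $S(e^{j\omega}) = (I + H(e^{j\omega}))^{-1}$ in place of $H$ would produce the sensitivity-peak row of the table.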
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
