*Article* **A Theoretical Dynamical Noninteracting Model for General Manipulation Systems Using Axiomatic Geometric Structures**

**Paolo Mercorelli**

Institute of Product and Process Innovation, Leuphana University of Lueneburg, Universitaetsallee 1, D-21335 Lueneburg, Germany; mercorelli@uni.leuphana.de; Tel.: +49-4131-677-1896

**Abstract:** This paper presents a new theoretical approach to the study of robotic manipulator dynamics. It is based on the well-known geometric approach to system dynamics, in which axiomatic definitions of geometric structures concerning invariant subspaces are used. In this framework, certain typical problems in robotics are mathematically formalised and analysed in axiomatic form. The outcomes are sufficiently general that it is possible to discuss the structural properties of robotic manipulation. A generalized theoretical linear model is used, and a thorough analysis is made. The noninteracting nature of this model is also proven through a specific theorem.

**Keywords:** manipulation system; geometric approach; noninteraction

**MSC:** 19L64; 70Q05; 14L24

#### **1. Introduction**

To briefly describe the history of robotics up to the present day is not a superfluous task. It is curious to first search for the original meaning of the word. Some philologists propose that the term "robot" comes from the Latin root "robur-roboris", one of the meanings of which is "force". In any case, the term "robot" was introduced for the first time in 1921 by the Czech writer Karel Capek in his satirical work entitled "Rossum's Universal Robots"; in Czech, "robota" means "work". Some consider the source to be Indo-European, so it might be useful, in this endeavor, to track down various corruptions, such as "labor-laboris", and hence "work". Capek's satirical work emphasizes the difference between the machine and the human; in particular, the substantial difference consists in the fact that robots never get tired.

After World War II, the need to manipulate radioactive material led to the construction of the first remotely controlled mechanical manipulators. They were made in the laboratories of Argonne and Oak Ridge (USA) and were of the master–slave type: manipulators consisting of a "master" part driven by the human operator, whose movements were duplicated on a "slave" part through a series of mechanical linkages. General Electric, together with General Mills, called these teleoperators. However, teleoperators were certainly not the only expression of robotics in the years following World War II; they were joined by CNC machines (Computerized Numerical Control machines), initially used for the lamination of some parts of aircraft. In fact, numerically controlled machines carried considerable weight in the history of robotics. Their great merit was to fully replace the human operator of the teleoperators. In 1954, for the first time, George Devol replaced the human operator in teleoperators with a programmable controller similar to that present in numerical control machines, giving rise to the first real robot: "real" to emphasize the fact that the machine, during the execution of its tasks, does not depend in any way on a human. Devol's patent rights were bought by Joseph Engelberger, a student at Columbia University, who in 1956 founded Unimation. In 1961, this company installed in the plants of General Motors the first robot that, because of its programmability, was able to perform a wide range of operations, multiplying the flexibility of the assembly chain.

**Citation:** Mercorelli, P. A Theoretical Dynamical Noninteracting Model for General Manipulation Systems Using Axiomatic Geometric Structures. *Axioms* **2022**, *11*, 309. https:// doi.org/10.3390/axioms11070309

Academic Editor: Humberto Bustince

Received: 21 April 2022 Accepted: 13 June 2022 Published: 25 June 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Mechanically, the robot had an open kinematic chain with many degrees of freedom, and consequently its control was not an easy task. To improve the pilotability of robots and strengthen their capabilities, in 1962, Ernst inserted the first force sensors into the structure of mechanical robots.

Sensing robots went on to take different forms: Tomovic and Bono developed a pressure sensor for robotic grasping; McCarthy developed a binary vision system, etc. This research activity culminated in the first computer-controlled industrial manipulator, the Cincinnati Milacron "The Tomorrow Tool" (*T*3, 1973). In addition, in the 1970s, Unimation began to produce the PUMA (Programmable Universal Machine for Assembly), which represents one of the cornerstones in the history of robotics.

During the 1980s, research aimed at improving the performance of industrial robots was developed, together with the first techniques to control the position and the stiffness of robots in feedback. In the past few years, the trend has been to build very versatile devices. One example is the Robotworld of Automatix, structured into several modules, each with four degrees of freedom, which can be connected to different tools for the execution of various operations.

The potential applications of research in the field of robotic manipulation have been, and still are, the reasons driving the research. For example, think of the possibility of operating in environments hostile to humans (applications in space exploration, in nuclear plants, in the removal of toxic waste), the robotization of work notoriously difficult and/or dangerous for humans, or, finally, medical applications (robotic prostheses, robotic surgery).

Technological development has not only increased the use of robots in industrial fields, but has also opened up different applications, such as medical instruments for minimally invasive surgery. These applications require high-precision performance and often also a high execution speed. In general, therefore, studies highlighting the extent to which the potentialities and prospects of such devices can be improved are required. Robotic manipulation systems are of great importance due to their flexibility for application in any industrial sector. Their flexibility is a result of the multifunctionality of robotic hands, which allows for their application to industrial processes in many fields and for the possibility of interaction and cooperation with other robotic structures. This is connected with the fact that manipulation skills are, together with speech, probably the most important features that distinguish humans from animals. Some evolutionary biologists believe that a certain part of human supremacy over other primates is also due to the prehensility of our upper limbs, which allowed the immediate translation of ideas into what are usually called actions, a process otherwise definitely more complicated. In other words, one might wonder what humans would be without the prehensility of the upper limbs; a reality different from the one we have today seems to be unimaginable. This consideration, which can seem elementary, leads us to think about how important it is that a machine performs a certain function or, better, a certain action. In this sense, to be able to affect the environment, a manipulator needs more versatile hands. As a result of technological development, the application of robotics is on the rise in many industrial sectors, and even in the medical field (e.g., micro-manipulation of internal tissues or laparoscopy).
Due to the high mechanical efficiency and the vast application possibilities of robots, manipulation has in past years been followed with great interest in both the academic and the industrial world ([1–9]). For these advanced applications, robotic devices with high performance in terms of precision and speed are required. In order to achieve such performance, a general strategy in robotics is represented by the decoupling control technique.

#### *1.1. Coupling and Decoupling*

The decoupling of coupled systems is one of the most interesting problems in system theory and control. Decoupling control strategies allow us to simplify the control itself and also the identification procedure for the parameters of the robot. The couplings contained in the mathematical description of the robot model, through the motor inertia, the mass inertia, the stiffness and the damping matrix within the joints, should be decoupled by the control. These couplings lead to an eighth-order multivariable system for each joint. The decoupling within the joints is achieved by a novel MIMO state controller using motor positions and output torques, together with their derivatives, as states. In general, in order to design the controller of robots taking the coupling into account, the system is broken down into two decoupled subsystems using modal decoupling and subsequently considered as two separate single-variable (SISO) systems. Thus, the parameters of the SISO state controller can be determined for the respective subsystem, and two independent controllers are to be designed [10]. Global asymptotic stability for the entire system can be achieved with the MIMO state controller. The controller significantly expands the approaches from [11], both theoretically and practically. Decoupling control represents one of the most interesting controller structures that have been implemented on robots. Multivariable systems, in which several output variables can depend on several input variables at the same time, are characterized by the mutual coupling of inputs and outputs. It is therefore the task of control design for multivariable systems to minimize the influence of the coupling so that, in the ideal case, each output variable is only influenced by a corresponding virtual manipulated variable, and thus the controlled system achieves the desired smooth dynamic behavior [12]. When designing a controller for a linear multivariable system, there are basically two options: a centralized design based on the overall system, or a decentralized design based on decoupled subsystems.


The modal method for the controller design is used to decouple the system from the controller. While central controller design is based on the overall system, decentralized controller design uses several decoupled, lower-order subsystems instead of the high-order, coupled system. In the following, the two methods are presented and analyzed one after the other. In a centralized decoupled control design, the state controller can be designed through complete modal synthesis, whereby the closed overall system is decoupled. In the context of a decentralized decoupled control design, if a multivariable system is decoupled, the synthesis problem is reduced to the case of single-loop control. For this, the system is first transformed into a modal form in which it can be divided into several small, decoupled subsystems. Decoupling control finds application not only in manipulation systems, but also in other systems. One example is represented by mobile robots. For instance, in [13], an explicit model predictive control (MPC), in combination with sliding mode control in the context of a decoupling controller, is proposed. Decoupling control is particularly important for MPC in order to reduce the computational load. In addition, recent MPC contributions that take the problem of computational load into account in the tracking of different trajectories mark progress in optimal design for model predictive control based on a new improved intelligent technique named the modified multitracker optimization algorithm; see, for instance, [14]. This modification improves the exploration behavior to prevent it from becoming trapped in a local optimum. The proposed method is applied to a robotic manipulator to track trajectories. In addition, more recently, in [15], an optimized algorithm in the MPC context for autonomous vehicles was proposed.
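The modal decoupling idea described above can be illustrated with a minimal numerical sketch. All matrix values below are hypothetical, chosen only so that the two second-order subsystems are mutually coupled; the transformation into modal coordinates $z = V^{-1}x$ yields a (complex) diagonal dynamic matrix, i.e., fully decoupled first-order modes.

```python
import numpy as np

# Hypothetical coupled system: two second-order subsystems (states
# [x1, x1', x2, x2']) coupled through the off-diagonal stiffness terms.
A = np.array([[ 0.0,  1.0,  0.0,  0.0],
              [-4.0, -0.4,  2.0,  0.0],
              [ 0.0,  0.0,  0.0,  1.0],
              [ 2.0,  0.0, -6.0, -0.6]])

# Modal transformation: with A = V diag(lam) V^{-1}, the coordinates
# z = V^{-1} x evolve as independent first-order (complex) modes.
lam, V = np.linalg.eig(A)
A_modal = np.linalg.inv(V) @ A @ V

# The modal dynamic matrix is diagonal up to round-off: the modes are
# mutually decoupled and can be controlled independently.
off_diag = A_modal - np.diag(np.diag(A_modal))
print(np.max(np.abs(off_diag)))  # close to zero (machine precision)
```

In a decentralized design, each modal subsystem (here, a complex-conjugate pair of modes) can then be assigned its own low-order controller.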
More generally, the decoupling proposed in this contribution can be integrated with the methods proposed in [16], as well as in [17,18], in which the D-decomposition method is used in order to compute optimized controller gains that provide good performance in different engineering applications.

#### *1.2. Main Contribution of the Paper*

The present paper presents a new approach to the study of robotic manipulator dynamics based on the well-known geometric control approach to system dynamics. Using this framework, several typical problems in robotics are mathematically formalised and analysed. The outcomes are sufficiently general that it is possible to discuss the structural properties of robotic manipulation, which are obtained using a geometric approach. The geometric approach was pioneered in the 1970s, in [19–22]. The approach used for the derivation of these properties is decidedly new in this kind of literature, which refers especially to [23–26]. The novelty consists in using the geometric approach (the theory of invariant subspaces) for the analysis, and then in deriving the properties listed above for the synthesis of control systems that guarantee, and thus allow one to exploit, these properties in any operating condition. The seminal references for this approach were [20,27]. The problem of the noninteracting force/motion model is investigated here: a generalised linear model is used, and a careful analysis is performed. Contributions to the topic of manipulation using the geometric approach further progressed through the use of linear algebra. Recent contributions, such as [28–30], have led to progress in the analysis and synthesis of geometric controllers for application to electro-mechanical systems. In [31–33], a geometric approach guarantees robustness and many practical advantages in possible real applications; see [34]. In particular, the geometric approach can be focused on the disturbance decoupling problem [35], an issue that has attracted many scientists. Furthermore, in [36–38], interesting and interpretable results are proposed. For a broad overview of the manipulation control problem, the reader is referred to [26] and the references therein. The present paper aims at analysing the structural properties of noninteraction with respect to rigid-body object motions and reachable contact forces, along with possible mechanism redundancy. More recently, refs. [39–41] underline the importance of noninteraction in the control strategy to simplify the structure of the controller. In the same way, refs. [42–44] point out the importance of position/force control in robotic manipulation.

The present study is conducted using geometric techniques. Some axiomatic definitions of geometric structures concerning invariant subspaces are used as a possible framework in order to derive some structural properties in the considered system. This paper follows the contributions published in [31,35] and, more recently, in [32,34,45]. These studies on geometric control represent an interesting line of research in which problems such as decoupling, noninteraction and disturbance rejection are taken into account in the context of mechanical systems.

#### *1.3. Structure of this Contribution*

The present paper is structured as follows. In Section 2, the linearized dynamical model is derived. Section 3 is dedicated to the reachable internal contact forces and a fundamental theorem is demonstrated. In Section 4, the noninteraction property is presented. In Section 5, a possible reinterpretation of the theoretical results is proposed and a case study with its simulations is shown. The paper closes with a conclusion and an appendix in which the proof of the theorem that states the structural property of noninteraction is proposed.

#### **2. Dynamic Model**

For the dynamic model, $\mathbf{q} \in \mathbb{R}^{q}$ denotes the vector of manipulator joint positions, $\boldsymbol{\tau} \in \mathbb{R}^{q}$ denotes the vector of joint actuator torques, $\mathbf{u} \in \mathbb{R}^{d}$ denotes the vector locally describing the position and the orientation of a frame attached to the object, and $\mathbf{w} \in \mathbb{R}^{d}$ denotes the vector of forces and torques resulting from external forces acting directly on the object. In the literature, $\mathbf{w}$ usually refers to the disturbance vector. The force/torque interaction $\mathbf{t}_{i}$ at the $i$-th contact is taken into account by using a lumped-parameter $(\mathbf{K}_{i}, \mathbf{B}_{i})$ model of viscoelastic phenomena. According to this model, the contact force vector $\mathbf{t}_{i}$ is as follows:

$$\mathbf{t}_{i} = \mathbf{K}_{i} \left( {}^{h}\mathbf{c}_{i} - {}^{o}\mathbf{c}_{i} \right) + \mathbf{B}_{i} \left( {}^{h}\dot{\mathbf{c}}_{i} - {}^{o}\dot{\mathbf{c}}_{i} \right), \tag{1}$$

where the vectors ${}^{h}\mathbf{c}_{i}$ and ${}^{o}\mathbf{c}_{i}$ describe the postures of two contact frames, the first on the manipulator and the second on the object, where the $i$-th contact spring and damper are anchored. The matrices $\mathbf{K}_{i}$ and $\mathbf{B}_{i}$ are symmetric and positive definite, and the dimensions of the vectors involved in Equation (1) depend on the particular model used to describe the contact interaction [46]. The Jacobian $\mathbf{J}$ and grasp matrix $\mathbf{G}$ of the manipulation system (see [25,47]) are defined by the linear maps relating the velocities of the vectors ${}^{h}\mathbf{c}$ and ${}^{o}\mathbf{c}$ to the joint and object velocities $\dot{\mathbf{q}}$ and $\dot{\mathbf{u}}$, respectively:

$$\begin{aligned} {}^{h}\dot{\mathbf{c}} &= \mathbf{J}\dot{\mathbf{q}}, \\ {}^{o}\dot{\mathbf{c}} &= \mathbf{G}^{T}\dot{\mathbf{u}}. \end{aligned} \tag{2}$$

Note that $\mathbf{J}^{T}\mathbf{t}$ and $\mathbf{G}\mathbf{t}$ dually represent the effects of the contact forces $\mathbf{t}$ on the manipulator and object dynamics, whose full nonlinear models are, respectively:

$$\begin{aligned} \mathbf{M}_{h}\ddot{\mathbf{q}} + \mathbf{Q}_{h} &= -\mathbf{J}^{T}\mathbf{t} + \boldsymbol{\tau}, \\ \mathbf{M}_{o}\ddot{\mathbf{u}} + \mathbf{Q}_{o} &= \mathbf{G}\mathbf{t} + \mathbf{w}. \end{aligned} \tag{3}$$

Here, $\mathbf{M}_{h}$ and $\mathbf{M}_{o}$ are symmetric and positive definite inertia matrices, while $\mathbf{Q}_{h}$ and $\mathbf{Q}_{o}$ are terms including the velocity-dependent and gravity forces of the manipulator and the object, respectively. To proceed with the analysis of the linearised model of the full manipulation system, consider a reference equilibrium configuration

$$\begin{array}{ccc} \mathbf{q} = \mathbf{q}_{0}, & \mathbf{u} = \mathbf{u}_{0}, & \dot{\mathbf{q}} = \dot{\mathbf{u}} = \mathbf{0}, \\ \boldsymbol{\tau} = \boldsymbol{\tau}_{0}, & \mathbf{w} = \mathbf{w}_{0}, & \mathbf{t} = \mathbf{t}_{0}, \end{array}$$

such that

$$\boldsymbol{\tau}_{0} = \mathbf{J}^{T}\mathbf{t}_{0} \qquad \text{and} \qquad \mathbf{w}_{0} = -\mathbf{G}\mathbf{t}_{0}.$$

The linear approximation of the manipulation system in the neighbourhood of this equilibrium is given by

$$\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}_{\tau}\delta\boldsymbol{\tau} + \mathbf{B}_{w}\delta\mathbf{w}, \tag{4}$$

where the state and input vectors are defined as the departures from the reference equilibrium configuration as follows:

$$\begin{aligned} \mathbf{x} &= \left[ \delta\mathbf{q}^{T}, \delta\mathbf{u}^{T}, \delta\dot{\mathbf{q}}^{T}, \delta\dot{\mathbf{u}}^{T} \right]^{T} = \left[ (\mathbf{q}-\mathbf{q}_{0})^{T}\ (\mathbf{u}-\mathbf{u}_{0})^{T}\ \dot{\mathbf{q}}^{T}\ \dot{\mathbf{u}}^{T} \right]^{T}, \\ \delta\boldsymbol{\tau} &= \boldsymbol{\tau} - \mathbf{J}^{T}\mathbf{t}_{0}, \\ \delta\mathbf{w} &= \mathbf{w} + \mathbf{G}\mathbf{t}_{0}. \end{aligned} \tag{5}$$

The dynamic, input and disturbance matrices are

$$\mathbf{A} = \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ \mathbf{L}_{k} & \mathbf{L}_{b} \end{bmatrix}, \quad \mathbf{B}_{\tau} = \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \mathbf{M}_{h}^{-1} \\ \mathbf{0} \end{bmatrix}, \quad \mathbf{B}_{w} = \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{M}_{o}^{-1} \end{bmatrix}. \tag{6}$$

To simplify the notation, we will henceforth omit the symbol $\delta$. According to [47], by neglecting gravity, assuming a locally isotropic model of viscoelastic phenomena (where the stiffness matrix $\mathbf{K}$ is proportional to the damping matrix $\mathbf{B}$), and assuming that the local variations of the Jacobian and grasp matrices are small, the blocks $\mathbf{L}_{k}$ and $\mathbf{L}_{b}$ in $\mathbf{A}$ can be obtained simply as

$$\mathbf{L}_{k} = -\mathbf{M}^{-1}\mathbf{P}_{k}, \qquad \mathbf{L}_{b} = -\mathbf{M}^{-1}\mathbf{P}_{b}, \tag{7}$$

where

$$\mathbf{M} = \mathrm{diag}(\mathbf{M}_{h}, \mathbf{M}_{o}),$$

$$\mathbf{P}_{k} = \begin{bmatrix} \mathbf{J}^{T} \\ -\mathbf{G} \end{bmatrix} \mathbf{K} \begin{bmatrix} \mathbf{J} & -\mathbf{G}^{T} \end{bmatrix},$$

$$\mathbf{P}_{b} = \begin{bmatrix} \mathbf{J}^{T} \\ -\mathbf{G} \end{bmatrix} \mathbf{B} \begin{bmatrix} \mathbf{J} & -\mathbf{G}^{T} \end{bmatrix}.$$

Concerning the contact forces, we then obtain

$$\mathbf{t}' = \mathbf{t} - \mathbf{t}_{0} = \mathbf{K}(\mathbf{J}\delta\mathbf{q} - \mathbf{G}^{T}\delta\mathbf{u}) + \mathbf{B}(\mathbf{J}\delta\dot{\mathbf{q}} - \mathbf{G}^{T}\delta\dot{\mathbf{u}}), \tag{8}$$

and in terms of matrices, we have

$$\mathbf{t}' = \mathbf{C}_{t}\mathbf{x},$$

where the output matrix of the contact force is as follows:

$$\mathbf{C}_{t} = \begin{bmatrix} \mathbf{K}\mathbf{J} & -\mathbf{K}\mathbf{G}^{T} & \mathbf{B}\mathbf{J} & -\mathbf{B}\mathbf{G}^{T} \end{bmatrix}. \tag{9}$$
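The assembly of the linearized model in Equations (4)–(9) can be sketched numerically. The dimensions and matrix values below are hypothetical placeholders (random $\mathbf{J}$ and $\mathbf{G}$, diagonal $\mathbf{K}$, $\mathbf{B}$, $\mathbf{M}_h$, $\mathbf{M}_o$); the code only illustrates how $\mathbf{A}$, $\mathbf{B}_{\tau}$, $\mathbf{B}_{w}$ and $\mathbf{C}_{t}$ are built from the blocks defined above.

```python
import numpy as np

# Hypothetical dimensions and values, for illustration only:
# q = 3 joints, d = 3 object DoFs, c = 4 contact coordinates.
q, d, c = 3, 3, 4
rng = np.random.default_rng(0)

J  = rng.standard_normal((c, q))   # hand Jacobian
G  = rng.standard_normal((d, c))   # grasp matrix
K  = 10.0 * np.eye(c)              # contact stiffness (s.p.d.)
B  = 0.5 * np.eye(c)               # contact damping, proportional to K
Mh = 2.0 * np.eye(q)               # manipulator inertia
Mo = 1.0 * np.eye(d)               # object inertia

# Blocks of Equations (6) and (7).
M  = np.block([[Mh, np.zeros((q, d))], [np.zeros((d, q)), Mo]])
W  = np.vstack([J.T, -G])                      # [[J^T], [-G]]
Pk = W @ K @ np.hstack([J, -G.T])
Pb = W @ B @ np.hstack([J, -G.T])
Lk = -np.linalg.solve(M, Pk)                   # L_k = -M^{-1} P_k
Lb = -np.linalg.solve(M, Pb)                   # L_b = -M^{-1} P_b

n  = q + d                                     # positions q and u together
A  = np.block([[np.zeros((n, n)), np.eye(n)], [Lk, Lb]])
Bt = np.vstack([np.zeros((n, q)), np.linalg.inv(Mh), np.zeros((d, q))])
Bw = np.vstack([np.zeros((n, d)), np.zeros((q, d)), np.linalg.inv(Mo)])
Ct = np.hstack([K @ J, -K @ G.T, B @ J, -B @ G.T])   # Equation (9)

print(A.shape, Bt.shape, Bw.shape, Ct.shape)
```

With these (illustrative) dimensions, the state has $2(q+d) = 12$ components, so $\mathbf{A}$ is $12 \times 12$, $\mathbf{B}_{\tau}$ and $\mathbf{B}_{w}$ are $12 \times 3$, and $\mathbf{C}_{t}$ is $4 \times 12$.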

The properties of grasping defined below have a relevant influence on the dynamic behaviour of the manipulation system, refs. [25,47]. These properties are based on the null spaces of the Jacobian matrix $\mathbf{J}$, the grasp matrix $\mathbf{G}$ and their transposes.

**Definition 1.** *A grasp (or manipulation system) is considered defective if* $\ker(\mathbf{J}^{T}) \neq \{\mathbf{0}\}$*.*

**Definition 2.** *A grasp is considered indeterminate if* $\ker(\mathbf{G}^{T}) \neq \{\mathbf{0}\}$*.*

If a grasp is indeterminate, there exist motions of the object under which no variations of the contact forces occur. In other words, indeterminacy implies that the object is not firmly grasped.

**Definition 3.** *A manipulation system is considered graspable if* $\ker(\mathbf{G}) \neq \{\mathbf{0}\}$*.*

If a system is graspable, it is possible to exert contact forces with zero resultant forces on the object. Usually in the literature, the forces belonging to the null space of **G** are referred to as internal forces. Finally, the well-known notion of manipulator redundancy is formalised as follows.

**Definition 4.** *A grasp is considered redundant if* $\ker(\mathbf{J}) \neq \{\mathbf{0}\}$*.*

**Proposition 1.** *If a system is not indeterminate, i.e., $\ker(\mathbf{G}^{T}) = \{\mathbf{0}\}$, then the minimal $\mathbf{A}$-invariant subspace containing $\mathrm{im}(\mathbf{B}_{\tau})$, $\min\mathcal{I}(\mathbf{A}, \mathbf{B}_{\tau})$, is externally stable.*
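The minimal $\mathbf{A}$-invariant subspace containing $\mathrm{im}(\mathbf{B}_{\tau})$ is the classical reachability (Krylov) subspace $\mathrm{im}[\mathbf{B}\ \mathbf{A}\mathbf{B}\ \cdots\ \mathbf{A}^{n-1}\mathbf{B}]$. A small numerical sketch follows, with hypothetical matrices chosen so that one mode is unreachable; only the construction and the invariance check are illustrated, not the stability claim of the proposition.

```python
import numpy as np

def min_A_invariant(A, B, tol=1e-9):
    """Minimal A-invariant subspace containing im(B): the reachability
    (Krylov) subspace im[B, AB, ..., A^(n-1)B], returned as an
    orthonormal basis matrix computed via SVD."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    K = np.hstack(blocks)
    U, s, _ = np.linalg.svd(K)
    r = int(np.sum(s > tol))
    return U[:, :r]

# Hypothetical example (illustrative values): the third mode is
# unreachable, so the subspace has dimension 2, not 3.
A = np.diag([1.0, 2.0, 3.0])
B = np.array([[1.0], [1.0], [0.0]])
R = min_A_invariant(A, B)

# A-invariance check: every column of A R stays inside span(R),
# so the projection residual onto span(R) vanishes.
residual = A @ R - R @ (R.T @ (A @ R))
print(R.shape[1], np.linalg.norm(residual))
```

The invariance of the Krylov subspace follows from the Cayley–Hamilton theorem: $\mathbf{A}^{n}\mathbf{B}$ is a linear combination of the lower powers already in the basis.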

From now on, we will assume that the considered system is not indeterminate, i.e., $\ker(\mathbf{G}^{T}) = \{\mathbf{0}\}$. Concerning the coordinated movements of the object, the following propositions [47] show that the subspace of rigid-body motions, characterized by $\mathbf{J}\boldsymbol{\Gamma}_{qc} = \mathbf{G}^{T}\boldsymbol{\Gamma}_{uc}$, is reachable.

**Proposition 2.** *The rigid kinematics are described by the base matrix* **Γ** *whose columns form a basis for*

$$\ker\begin{bmatrix} \mathbf{J} & -\mathbf{G}^{T} \end{bmatrix} = \mathrm{im}(\boldsymbol{\Gamma}), \tag{10}$$

*where* $\boldsymbol{\Gamma} = [\boldsymbol{\Gamma}_{qc}^{T}\ \boldsymbol{\Gamma}_{uc}^{T}]^{T}$*.*

**Proposition 3.** *Let the subspace of rigid-body motions be defined as the column space of* $\mathbf{T}_{c}$*, where*

$$\mathbf{T}_{c} = \begin{bmatrix} \boldsymbol{\Gamma}_{qc} & \mathbf{0} \\ \boldsymbol{\Gamma}_{uc} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}_{qc} \\ \mathbf{0} & \boldsymbol{\Gamma}_{uc} \end{bmatrix}. \tag{11}$$

*Accordingly, the following holds:*

$$\mathrm{im}(\mathbf{T}_{c}) \subseteq \min\mathcal{I}(\mathbf{A}, \mathbf{B}_{\tau}).$$

#### **3. Reachable Internal Contact Forces**

Contact forces **t** are exerted on an object by the manipulation system in order to maintain the grasp, reject disturbance wrenches **w** and control the object motion. Therefore, the control of contact forces is a fundamental part of the manipulation control problem: the better the control of forces, the finer the manipulation. In [47], the reachable subspace of contact forces, as outputs of the dynamic system given in Equation (4), was studied. The following theorem provides an explicit formula for the subspace of reachable internal forces.

**Theorem 1.** *Under the hypothesis that* **K** *is proportional to* **B***,*

$$\mathcal{R}_{ti} = \mathrm{im}\big((\mathbf{I} - \mathbf{K}\mathbf{G}^{T}(\mathbf{G}\mathbf{K}\mathbf{G}^{T})^{-1}\mathbf{G})\,\mathbf{C}_{t}\big) = \mathrm{im}\big((\mathbf{I} - \mathbf{K}\mathbf{G}^{T}(\mathbf{G}\mathbf{K}\mathbf{G}^{T})^{-1}\mathbf{G})\,\mathbf{K}\mathbf{J}\big).$$

Then, the output matrix is defined as follows:

$$\mathbf{e}_{ti} = \mathbf{E}_{ti}\mathbf{x}, \qquad \text{with} \quad \mathbf{E}_{ti} = (\mathbf{I} - \mathbf{K}\mathbf{G}^{T}(\mathbf{G}\mathbf{K}\mathbf{G}^{T})^{-1}\mathbf{G})\,\mathbf{C}_{t} = \begin{bmatrix} \mathbf{Q}_{k} & \mathbf{0} & \mathbf{Q}_{\beta} & \mathbf{0} \end{bmatrix}, \tag{12}$$

where

$$\mathbf{Q}_{k} = (\mathbf{I} - \mathbf{K}\mathbf{G}^{T}(\mathbf{G}\mathbf{K}\mathbf{G}^{T})^{-1}\mathbf{G})\,\mathbf{K}\mathbf{J}, \tag{13}$$

and

$$\mathbf{Q}_{\beta} = (\mathbf{I} - \mathbf{B}\mathbf{G}^{T}(\mathbf{G}\mathbf{B}\mathbf{G}^{T})^{-1}\mathbf{G})\,\mathbf{B}\mathbf{J}. \tag{14}$$

It should be remarked that im(**Q***k*) = im(**Q***β*) under the hypothesis im(**K**) = im(**B**).
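A quick numerical check of the structure in Theorem 1 can be made with hypothetical $\mathbf{K}$, $\mathbf{G}$ and $\mathbf{J}$ (all values below are illustrative): the projector $\mathbf{I} - \mathbf{K}\mathbf{G}^{T}(\mathbf{G}\mathbf{K}\mathbf{G}^{T})^{-1}\mathbf{G}$ annihilates $\mathbf{G}$ from the left, so $\mathrm{im}(\mathbf{Q}_{k})$ lies in $\ker(\mathbf{G})$, i.e., the reachable internal forces produce no net wrench on the object.

```python
import numpy as np

# Hypothetical contact data (illustrative): c = 4 contact coordinates,
# d = 3 object DoFs, q = 3 joints; a random G has full row rank.
rng = np.random.default_rng(1)
c, d, q = 4, 3, 3
G = rng.standard_normal((d, c))
K = np.diag([10.0, 12.0, 8.0, 9.0])   # contact stiffness, s.p.d.
J = rng.standard_normal((c, q))

# Projector of Theorem 1: P = I - K G^T (G K G^T)^{-1} G.
P  = np.eye(c) - K @ G.T @ np.linalg.solve(G @ K @ G.T, G)
Qk = P @ K @ J                        # Equation (13)

# Since G P = G - (G K G^T)(G K G^T)^{-1} G = 0, im(Qk) lies in ker(G):
# the internal-force directions exert zero resultant on the object.
print(np.linalg.norm(G @ Qk))  # close to zero
```

The same computation with $\mathbf{B}$ in place of $\mathbf{K}$ verifies the analogous property for $\mathbf{Q}_{\beta}$ in Equation (14).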

#### **4. Noninteraction as a Structural Property**

The present section aims to analyse noninteraction as a control property for a general grasping mechanism with respect to the rigid-body object motions and the reachable contact forces, together with the possible mechanism redundancy. The geometric approach is used for this analysis. It should be remarked that the earliest geometric approaches to noninteracting control were proposed by Basile and Marro [19,20] and by Wonham and Morse [21,22,27]. The results of this section address the force/motion noninteracting control of general manipulation mechanisms and are based on the necessary and sufficient conditions for the existence of a noninteracting control law given in [19,20]. We now proceed to analyse noninteraction as a structural property of general manipulation systems by formalizing the notion of force/motion noninteraction.

**Definition 5.** *A control law for the dynamic system in Equation (4) is noninteracting with respect to the regulated outputs $\mathbf{e}_{uc}$, $\mathbf{e}_{ti}$ and $\mathbf{e}_{qr}$ if there exists a partition $\boldsymbol{\tau}_{uc}$, $\boldsymbol{\tau}_{ti}$ and $\boldsymbol{\tau}_{qr}$ of the input vector $\boldsymbol{\tau}$ such that, for a zero initial condition, each input $\boldsymbol{\tau}_{(\cdot)}$ (with all the other inputs being zero) only affects the corresponding output $\mathbf{e}_{(\cdot)}$.*

#### *The Fundamental Theorem*

The following theorem shows that the noninteraction of the regulated outputs $\mathbf{e}_{uc}$, $\mathbf{e}_{ti}$ and $\mathbf{e}_{qr}$ for the dynamic system in Equation (4) is an intrinsic structural property of general manipulation systems. Assume the following hypothesis:

**H1.** *The manipulation mechanism is not indeterminate, that is,* $\ker(\mathbf{G}^{T}) = \{\mathbf{0}\}$*.*

Then, the following theorem holds.

**Theorem 2** (Noninteraction)**.** *Consider the linearized manipulation system given in Equation (4). Under Hypothesis H1, there exists a noninteracting control law decoupling the following outputs:*
- *the rigid-body object motions* $\mathbf{e}_{uc}$*;*
- *the reachable internal contact forces* $\mathbf{e}_{ti}$*;*
- *the redundant joint motions* $\mathbf{e}_{qr}$*.*

**Proof.** Under Hypothesis H1, the pair $(\mathbf{A}, \mathbf{B}_{\tau})$ is stabilisable (Proposition 1). Moreover, under H1, for the linearized system in Equation (4), it is a simple matter to verify that the system is detectable based on the informative output $\mathbf{y} = (\mathbf{q}^{T}, \mathbf{t}^{T})^{T}$. Then, there exists an observer-based controller that is noninteracting with respect to the regulated outputs $\mathbf{e}_{uc}$, $\mathbf{e}_{ti}$ and $\mathbf{e}_{qr}$. Recall the following:

$$\mathbf{e}_{uc} = \mathbf{E}_{uc}\mathbf{x} = \begin{bmatrix} \mathbf{0} & \boldsymbol{\Gamma}_{uc}^{P} & \mathbf{0} & \mathbf{0} \end{bmatrix}\mathbf{x}; \tag{15}$$

$$\mathbf{e}_{ti} = \mathbf{E}_{ti}\mathbf{x} = \begin{bmatrix} \mathbf{Q}_{k} & \mathbf{0} & \mathbf{Q}_{\beta} & \mathbf{0} \end{bmatrix}\mathbf{x}; \tag{16}$$

$$\mathbf{e}_{qr} = \mathbf{E}_{qr}\mathbf{x} = \begin{bmatrix} \mathbf{F}_{r}^{P}\mathbf{M}_{h} & \mathbf{0} & \mathbf{0} & \mathbf{0} \end{bmatrix}\mathbf{x}. \tag{17}$$

Based on the theorem in [20], it emerges that the outputs $\mathbf{e}_{uc}$, $\mathbf{e}_{ti}$ and $\mathbf{e}_{qr}$ are noninteracting if and only if

$$\begin{aligned} \mathbf{E}_{uc}\mathcal{R}_{\mathcal{K}_{uc}} &= \mathrm{im}(\mathbf{E}_{uc}), \\ \mathbf{E}_{ti}\mathcal{R}_{\mathcal{K}_{ti}} &= \mathrm{im}(\mathbf{E}_{ti}), \\ \mathbf{E}_{qr}\mathcal{R}_{\mathcal{K}_{qr}} &= \mathrm{im}(\mathbf{E}_{qr}), \end{aligned} \tag{18}$$

where

$$\begin{aligned} \mathcal{K}_{uc} &= \ker(\mathbf{E}_{ti}) \cap \ker(\mathbf{E}_{qr}), \\ \mathcal{K}_{ti} &= \ker(\mathbf{E}_{uc}) \cap \ker(\mathbf{E}_{qr}), \\ \mathcal{K}_{qr} &= \ker(\mathbf{E}_{ti}) \cap \ker(\mathbf{E}_{uc}). \end{aligned} \tag{19}$$

Here, $\mathcal{R}_{\mathcal{K}_{(\cdot)}}$ denotes the $\mathcal{K}_{(\cdot)}$-constrained controllability subspace, which is the subspace of all the points reachable through trajectories leaving the origin and belonging to $\mathcal{K}_{(\cdot)}$. We go on to prove the equalities in Equation (18). To simplify the proof, we replace the intersection subspaces in Equation (19) with suitable subspaces whose constrained controllability sets suffice for our purposes. The demonstration is provided in Appendices A and B. ◻
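Constrained controllability subspaces of this kind can be computed with the classical algorithms of the geometric approach: the invariant subspace algorithm (ISA) for the maximal controlled invariant contained in the constraint, followed by the reachability sequence on it. The sketch below is one possible numerical implementation; the example matrices, tolerances and helper names are illustrative, not taken from the paper.

```python
import numpy as np

def orth(M, tol=1e-9):
    """Orthonormal basis of im(M); an n-by-0 matrix represents {0}."""
    if M.shape[1] == 0:
        return M
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def null(M, tol=1e-9):
    """Orthonormal basis of ker(M)."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def intersect(P, Q):
    """Basis of im(P) ∩ im(Q): solve P x = Q y via ker([P, -Q])."""
    if P.shape[1] == 0 or Q.shape[1] == 0:
        return np.zeros((P.shape[0], 0))
    N = null(np.hstack([P, -Q]))
    return orth(P @ N[:P.shape[1], :])

def preimage(A, V):
    """Basis of A^{-1} im(V) = ker((I - V V^T) A)."""
    return null((np.eye(A.shape[0]) - V @ V.T) @ A)

def constrained_controllability(A, B, K):
    """R_K: ISA for the maximal controlled invariant V* in im(K),
    then the reachability sequence S_{i+1} = V* ∩ (A S_i + im B)."""
    Kb, Bb = orth(K), orth(B)
    V = Kb
    while True:  # ISA: V_{i+1} = K ∩ A^{-1}(V_i + im B)
        Vn = intersect(Kb, preimage(A, orth(np.hstack([V, Bb]))))
        if Vn.shape[1] == V.shape[1]:
            break
        V = Vn
    S = np.zeros((A.shape[0], 0))
    while True:  # reachability sequence on V*
        Sn = intersect(V, orth(np.hstack([A @ S, Bb])))
        if Sn.shape[1] == S.shape[1]:
            return Sn
        S = Sn

# Hypothetical example: the constraint subspace im(K) = span(e1, e2)
# is itself a controllability subspace here, so dim(R_K) = 2.
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 1.]])
B = np.array([[0., 0.], [1., 0.], [0., 1.]])
K = np.array([[1., 0.], [0., 1.], [0., 0.]])
R = constrained_controllability(A, B, K)
print(R.shape[1])
```

All subspace operations are reduced to SVD rank computations, which is numerically robust for the small systems arising from manipulation models.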

#### **5. Case Study**

Considering the theorem in [34], which states that, for the linearized manipulation system under the hypothesis $\ker(\mathbf{G}^{T}) = \{\mathbf{0}\}$, it is possible to find a stabilizing state-feedback control law $\boldsymbol{\tau} = \mathbf{F}\mathbf{x} + \boldsymbol{\tau}^{*}$ and an input partition $\boldsymbol{\tau}^{*} = \mathbf{U}_{ti}\mathbf{u}_{ti} + \mathbf{U}_{uc}\mathbf{u}_{uc}$ which realize a noninteracting control of the reachable internal forces $\mathbf{t}_{i}$ and rigid-body object motions $\mathbf{u}_{c}$ as follows:

$$\begin{aligned} &\left( \mathbf{E}_{ti},\ \mathbf{A} + \mathbf{B}_{\tau}\mathbf{F},\ \mathbf{B}_{\tau}\mathbf{U}_{ti} \right), \\ &\left( \mathbf{E}_{uc},\ \mathbf{A} + \mathbf{B}_{\tau}\mathbf{F},\ \mathbf{B}_{\tau}\mathbf{U}_{uc} \right), \end{aligned} \tag{20}$$

it holds:

$$\begin{aligned} \mathcal{R}_{ti} &= \min\mathcal{I}(\mathbf{A} + \mathbf{B}_{\tau}\mathbf{F},\ \mathbf{B}_{\tau}\mathbf{U}_{ti}) \subseteq \ker(\mathbf{E}_{uc}), \\ \mathbf{E}_{ti}\mathcal{R}_{ti} &= \mathrm{im}(\mathbf{E}_{ti}), \end{aligned} \tag{21}$$

$$\begin{aligned} \mathcal{R}\_{uc} &= \min \mathcal{J}(\mathbf{A} + \mathbf{B}\_{\tau} \mathbf{F}, \, \mathbf{B}\_{\tau} \mathbf{U}\_{uc}) \subseteq \ker(\mathbf{E}\_{ti}), \\ \mathbf{E}\_{uc} \mathcal{R}\_{uc} &= \operatorname{im}(\mathbf{E}\_{uc}). \end{aligned} \tag{22}$$

The partition matrices **U***uc* and **U***ti* are such that the following conditions are satisfied:

$$\begin{aligned} \operatorname{im}(\mathbf{B}\_{\tau}\mathbf{U}\_{ti}) &= \operatorname{im}(\mathbf{B}\_{\tau}) \cap \mathcal{R}\_{ti}, \\ \operatorname{im}(\mathbf{B}\_{\tau}\mathbf{U}\_{uc}) &= \operatorname{im}(\mathbf{B}\_{\tau}) \cap \mathcal{R}\_{uc}, \end{aligned} \tag{23}$$

and matrix **F** satisfies the following conditions:

$$\begin{aligned} (\mathbf{A}+\mathbf{B}\_{\tau}\mathbf{F}) \mathcal{R}\_{uc} &\subseteq \mathcal{R}\_{uc}, \\ (\mathbf{A}+\mathbf{B}\_{\tau}\mathbf{F}) \mathcal{R}\_{ti} &\subseteq \mathcal{R}\_{ti} .\end{aligned} \tag{24}$$
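Condition (24) is a subspace-invariance requirement. Numerically, im(**R**) is (**A** + **B***τ***F**)-invariant exactly when the component of (**A** + **B***τ***F**)**R** orthogonal to im(**R**) vanishes. A minimal numpy sketch (the helper name `is_invariant` is ours, and the basis matrix is assumed to have full column rank):

```python
import numpy as np

def is_invariant(M, R, tol=1e-8):
    # im(R) is M-invariant iff M @ R stays inside im(R):
    # project M @ R onto the orthogonal complement of im(R)
    # and test whether the residual vanishes.
    Q, _ = np.linalg.qr(R)                                # orthonormal basis of im(R)
    residual = (np.eye(M.shape[0]) - Q @ Q.T) @ (M @ R)   # part outside im(R)
    return np.linalg.norm(residual) < tol
```

For instance, with M = [[0, 1], [0, 0]], the axis spanned by (1, 0) is invariant while the one spanned by (0, 1) is not.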

The decoupling controller is that sketched in Figure 1.

**Figure 1.** Force/motion decoupling controller.

In this section numerical results are reported for the simple defective gripper pictorially described in Figure 2.

**Figure 2.** Planar 3–DoF Cartesian manipulator. It exhibits a defective (ker(**J** *T* ) ≠ {**0**}) grasp.

It is a planar 3–DoF Cartesian manipulator and has been chosen to show the effectiveness of the previous results for industrial grippers. In the base frame B, the contact *centroids* (see [48]) are **c**<sup>1</sup> = (2, 2) and **c**<sup>2</sup> = (2, 3), and the object center of mass is *c<sup>b</sup>* = (2, 2.5), while the transpose of the Jacobian and the grasp matrix assume the following values:

$$\mathbf{J}^T = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}; \quad \mathbf{G} = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0.5 & 0 & -0.5 & 0 \end{bmatrix}.$$
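These numerical values can be sanity-checked directly: the grasp is defective because ker(**J***<sup>T</sup>*) is nontrivial. A quick numpy verification:

```python
import numpy as np

JT = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 0, 0, 1]], dtype=float)   # J^T from the case study

# dim ker(J^T) = number of columns - rank
dim_ker = JT.shape[1] - np.linalg.matrix_rank(JT)
print(dim_ker)                                # 1: the grasp is defective
print(JT @ np.array([1.0, 0.0, -1.0, 0.0]))  # zero vector: a kernel direction
```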

The inertia matrices of the object and manipulator, along with the stiffness and damping matrices at the contacts, are assumed to be normalized to the identity matrix. The controlled outputs are (a) the projection **t***<sup>i</sup>* of the contact forces along the 1–dimensional subspace of reachable contact forces im([0 1 0 − 1] *T* ) and (b) the projection of the rigid–body motion onto the 2–dimensional subspace of object motions

$$\operatorname{im}\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$$

which, since **u** = [*δ***x** *δ***y** *δθ*] *T* , corresponds to translations of the object.
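In the standard grasp-analysis convention, internal forces are contact forces annihilated by the grasp matrix, so the 1–dimensional subspace im([0 1 0 −1]<sup>T</sup>) quoted above should coincide with ker(**G**). This is quickly confirmed numerically:

```python
import numpy as np

G = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.5, 0.0, -0.5, 0.0]])  # grasp matrix from the case study
t = np.array([0.0, 1.0, 0.0, -1.0])   # candidate internal-force direction

print(np.allclose(G @ t, 0))          # True: t lies in ker(G)
print(4 - np.linalg.matrix_rank(G))   # 1: ker(G) is 1-dimensional
```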

#### *General Procedure*

The objective of the control is twofold: first, force and motion control must be decoupled; then, perfect tracking of the desired trajectories **t***id* and **u***cd* can be achieved. The decoupling controller is pictorially described in Figure 1 and has been synthesized according to Section 4, Equations (20), (23) and (24). The state–feedback matrix **F** and the input partition matrix **U** = [ **U***ti* **U***uc* ] are obtained according to the following procedure:

• Item 1: Considering Equation (21), the reachable subspace of the internal contact force is calculated:

$$\mathcal{R}\_{ti} = \left(\mathbf{E}\_{ti}^T \mathbf{E}\_{ti}\right)^{-1} \mathbf{E}\_{ti}^T \operatorname{im}\left(\mathbf{E}\_{ti}\right). \tag{25}$$

• Item 2: Once R*ti* is obtained, the partition matrix **U***ti* is obtained using (23) as follows:

$$\operatorname{im}(\mathbf{U}\_{ti}) = \left(\mathbf{B}\_{\tau}^{T}\mathbf{B}\_{\tau}\right)^{-1}\mathbf{B}\_{\tau}^{T}\left(\operatorname{im}(\mathbf{B}\_{\tau}) \cap \mathcal{R}\_{ti}\right).\tag{26}$$

• Item 3: Considering Equation (21), matrix **F***ti* is calculated such that the following condition is satisfied:

$$\mathcal{R}\_{ti} = \min \mathcal{J}(\mathbf{A} + \mathbf{B}\_{\tau} \mathbf{F}\_{ti}, \mathbf{B}\_{\tau} \mathbf{U}\_{ti}) \subseteq \ker(\mathbf{E}\_{uc}).\tag{27}$$

• Item 4: Considering Equation (22), the reachable subspace of the coordinated object motions is calculated as follows:

$$\mathcal{R}\_{\rm uc} = \left(\mathbf{E}\_{\rm uc}^T \mathbf{E}\_{\rm uc}\right)^{-1} \mathbf{E}\_{\rm uc}^T \text{im}\left(\mathbf{E}\_{\rm uc}\right). \tag{28}$$

• Item 5: Once R*uc* is obtained, the partition matrix **U***uc* is obtained using (23) as follows:

$$\operatorname{im}(\mathbf{U}\_{uc}) = \left(\mathbf{B}\_{\tau}^{T}\mathbf{B}\_{\tau}\right)^{-1}\mathbf{B}\_{\tau}^{T}\left(\operatorname{im}(\mathbf{B}\_{\tau}) \cap \mathcal{R}\_{uc}\right).\tag{29}$$

• Item 6: Considering Equation (22), matrix **F***uc* is calculated such that the following condition is satisfied:

$$\mathcal{R}\_{uc} = \min \mathcal{J}(\mathbf{A}\_{ti} + \mathbf{B}\_{\tau} \mathbf{F}\_{uc}, \mathbf{B}\_{\tau} \mathbf{U}\_{uc}) \subseteq \ker(\mathbf{E}\_{ti}).\tag{30}$$

• Item 7 : The final state–feedback noninteracting matrix is the following:

$$\mathbf{F} = \mathbf{F}\_{ti} + \mathbf{F}\_{uc}.\tag{31}$$

End
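The reachability subspaces computed in Items 3 and 6 are minimal invariant subspaces of the closed-loop map containing the image of the partitioned input matrix. Numerically, they can be obtained with the standard Krylov-type iteration; a sketch (the helper names `image` and `min_invariant` are ours, with a tolerance-based rank test):

```python
import numpy as np

def image(M, tol=1e-10):
    # Orthonormal basis of im(M) via the SVD.
    U, s, _ = np.linalg.svd(M)
    return U[:, : int((s > tol).sum())]

def min_invariant(A, B, tol=1e-10):
    # Minimal A-invariant subspace containing im(B):
    # the limit of im[B, AB, A^2 B, ...].
    R = image(B, tol)
    while True:
        R_new = image(np.hstack([R, A @ R]), tol)
        if R_new.shape[1] == R.shape[1]:  # dimension stalled: converged
            return R_new
        R = R_new
```

For the double integrator A = [[0, 1], [0, 0]] with B = [0, 1]<sup>T</sup>, the iteration returns a basis of the whole plane.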

Matrix **F***ti* realizes the invariance of the internal contact forces. In this context, it is possible to squeeze the object without moving it. Using matrix **F***ti* the following noninteracting transition matrix is obtained:

$$\mathbf{A}\_{ti} = \mathbf{A} + \mathbf{B}\_{\tau} \mathbf{F}\_{ti}.\tag{32}$$

In the same way, matrix **A***ti*, defined in Equation (32), together with matrix **F***uc*, realizes the invariance of the subspace of the object motions. In this context, it is possible to move the object without squeezing it. Thanks to matrix **F***uc*, the following transition matrix, which realizes the noninteracting control system, is obtained:

$$\mathbf{A}\_{dec} = \mathbf{A}\_{ti} + \mathbf{B}\_{\tau} \mathbf{F}\_{uc}. \tag{33}$$

To go more in depth, Equation (31) is obtained from **A***ti* = **A** + **B***τ***F***ti* and **A***dec* = **A***ti* + **B***τ***F***uc*. A combination of these two relations yields:

$$\mathbf{A}\_{dec} = \mathbf{A} + \mathbf{B}\_{\tau} (\mathbf{F}\_{ti} + \mathbf{F}\_{uc}), \tag{34}$$

and Equation (31) comes from Equation (34). Considering numerical data, the following matrices are obtained:

$$\mathbf{F} = \begin{bmatrix} -7 & 6.5 & -6 & -1 & -41 & 0 & -7.5 & -0.02 & -5.5 & -3 & -22 & 0\\ 10 & -120 & 10 & -72 & 5 & 0 & 0.29 & -16 & 0.29 & 7.2 & -6.2 & 0\\ -6.1 & 6.5 & -7.1 & -0.97 & -41 & 0 & -5.5 & -0.021 & -7.5 & -3.1 & -22 & 0 \end{bmatrix},$$

$$\mathbf{U}\_{ti} = \begin{bmatrix} -0.707\\ 0\\ 0.707 \end{bmatrix}, \qquad \mathbf{U}\_{uc} = \begin{bmatrix} 0 & -0.707\\ 1 & 0\\ 0 & -0.707 \end{bmatrix}.$$

Considering an angular velocity of 0.1 rad/s and a starting point **u***<sup>o</sup>* with coordinates (2.5, 1) (see Figure 2), the control task consists of maintaining the contact force at the constant value **t***<sup>o</sup>* = [0; 1; 0;−1] *T*. The joint forces *τ* <sup>∗</sup> = **U***ti***u***ti* + **U***uc***u***uc* represent the control law which guarantees the perfect tracking of the desired object motions with the desired internal force **t***<sup>i</sup>*; see Figure 3. The required circular trajectory of the center of mass of the object is represented in Figure 2.

**Figure 3.** Internal force **t***<sup>i</sup>* perfectly tracks the constant internal force while the object center of mass perfectly tracks the unit circle as depicted in Figure 2.

It is worthwhile to remark that, for a simple industrial gripper, under the reasonable hypothesis that the angular dynamics of the object can be disregarded, the linearized dynamics represents the complete description of the manipulation system dynamics.

#### **6. Conclusions and Future Work**

This paper considered the problem of noninteracting control in linearized general manipulation systems. The geometric approach was used throughout the paper. The main results demonstrate that, in general, there always exists an observer-based control law that is noninteracting with respect to the aforementioned outputs. Note that the generality of our approach allows this force/motion noninteraction to be considered a structural property of general manipulation systems. Future work may include an analysis of the robustness of the proposed theorem, including a robust design of the controller. Moreover, a noninteraction realized using feedback from the measured output, together with the corresponding robust control design, should also be taken into consideration.

**Funding:** This research received no external funding.

**Data Availability Statement:** The data used to support this research article are available upon request to the author.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **Nomenclature**


#### **Appendix A**

This appendix outlines Theorem 2 in all its formal aspects.

Before proceeding to prove Theorem 2, certain additional notation and results are required.

Let us define the subspaces of the state space im(**T***r*), im(**T***i*), im(**T***h*) and im(**T***d*), where

$$\begin{aligned} \mathbf{T}\_{r} &= \begin{bmatrix} \boldsymbol{\Gamma}\_{r} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}\_{r} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}; \quad \mathbf{T}\_{i} = \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \boldsymbol{\Gamma}\_{i} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}\_{i} \end{bmatrix}, \\\\ \mathbf{T}\_{h} &= \begin{bmatrix} \boldsymbol{\Gamma}\_{h} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}\_{h} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}, \quad \mathbf{T}\_{d} = \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \boldsymbol{\Gamma}\_{d} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}\_{d} \end{bmatrix}. \end{aligned} \tag{A1}$$

Here **Γ***<sup>r</sup>* , **Γ***<sup>i</sup>* , **Γ***<sup>h</sup>* and **Γ***<sup>d</sup>* are the basis matrices for the subspaces previously defined. In particular, **Γ***<sup>r</sup>* is a basis matrix for ker(**J**), and im(**Γ***i*) = **0** because our system is not indeterminate. Regarding the other subspaces, the following is established:

$$\begin{aligned} \boldsymbol{\Gamma}\_{h} &= \text{b.m. of } \operatorname{im}(\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}) \cap \max \mathcal{V}(\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{K}\mathbf{J}, \ker(\mathbf{G}\mathbf{K}\mathbf{J})), \\ \boldsymbol{\Gamma}\_{d} &= \text{b.m. of } \operatorname{im}(\mathbf{M}\_{o}^{-1}\mathbf{G}) \cap \max \mathcal{V}(\mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^{T}, \ker(\mathbf{J}^{T}\mathbf{K}\mathbf{G}^{T})). \end{aligned} \tag{A2}$$

*Appendix A.1. Demonstration of the Noninteraction Theorem*

We begin with the calculation of R<sub>K(⋅)</sub> and, in particular, with the calculation of the subspaces included in R<sub>K(⋅)</sub>. In this appendix, ker(**Q***k*) and ker(**Q***β*) will be calculated. It may be useful to remark that ker(**Q***k*) = ker(**Q***β*) under the hypothesis of proportionality outlined above.

$$\ker(\mathbf{E}\_{ti}) = \ker \begin{bmatrix} \mathbf{Q}\_k & \mathbf{0} & \mathbf{Q}\_\beta & \mathbf{0} \end{bmatrix} \supseteq \operatorname{im}(\mathbf{L}\_{ti}),$$

where

$$
\mathbf{L}\_{ti} = \begin{bmatrix}
\Gamma\_{qr} & \mathbf{0} & \Gamma\_{qc} & \mathbf{0} & \Gamma\_{qc} & \mathbf{0} & \Gamma\_{qc} & \mathbf{0} & \Gamma\_{qc} & \mathbf{0} & \dots \\
\mathbf{0} & \mathbf{0} & \Gamma\_{uc} & \mathbf{0} & -\Gamma\_{uc} & \mathbf{0} & -\mathbf{H}\Gamma\_{uc} & \mathbf{0} & -\mathbf{H}^{2}\Gamma\_{uc} & \mathbf{0} & \dots \\
\mathbf{0} & \Gamma\_{qr} & \mathbf{0} & \Gamma\_{qc} & \mathbf{0} & \Gamma\_{qc} & \mathbf{0} & \Gamma\_{qc} & \mathbf{0} & \Gamma\_{qc} & \dots \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \Gamma\_{uc} & \mathbf{0} & -\Gamma\_{uc} & \mathbf{0} & -\mathbf{H}\Gamma\_{uc} & \mathbf{0} & -\mathbf{H}^{2}\Gamma\_{uc} & \dots
\end{bmatrix}
$$

and **H** = **M**<sub>*o*</sub><sup>−1</sup>**GBG**<sup>*T*</sup>. It can be recalled that **B** is proportional to **K**. In the same way,

$$\ker(\mathbf{E}\_{uc}) \supseteq \operatorname{im}(\mathbf{L}\_{uc}),$$

where

$$\mathbf{L}\_{uc} = \begin{bmatrix} \boldsymbol{\Gamma}\_{qr} & \mathbf{0} & \boldsymbol{\Gamma}\_{h} & \mathbf{0} & \mathbf{S}\_{q} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \ker(\boldsymbol{\Gamma}\_{uc}^{T}) \cap \mathbf{S}\_{u} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}\_{qr} & \mathbf{0} & \boldsymbol{\Gamma}\_{h} & \mathbf{0} & \mathbf{S}\_{q} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \ker(\boldsymbol{\Gamma}\_{uc}^{T}) \cap \mathbf{S}\_{u} \end{bmatrix} \tag{A3}$$

with

$$\mathbf{S}\_q = \min \mathcal{J}(\mathbf{M}\_h^{-1} \mathbf{J}^T \mathbf{K} \mathbf{J}, \mathbf{M}\_h^{-1} \mathbf{J}^T \mathbf{K} \mathbf{G}^T) \tag{A4}$$

and

$$\mathbf{S}\_u = \min \mathcal{J}(\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^T, \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J}). \tag{A5}$$

Finally, it can be recalled that **Γ***<sup>h</sup>* is a basis matrix of

$$\operatorname{im}(\mathbf{M}\_h^{-1}\mathbf{J}^T) \cap \max \mathcal{V}(\mathbf{M}\_h^{-1}\mathbf{J}^T\mathbf{K}\mathbf{J}, \ker(\mathbf{G}\mathbf{K}\mathbf{J})).\tag{A6}$$

Regarding the subspace

$$\ker(\mathbf{E}\_{qr}) = \ker \begin{bmatrix} \boldsymbol{\Gamma}\_r^T \mathbf{M}\_h & \mathbf{0} & \mathbf{0} & \mathbf{0} \end{bmatrix},$$

it is very easy to check

$$\ker(\mathbf{E}\_{qr}) \supseteq \operatorname{im}(\mathbf{L}\_{qr})$$

with

$$\text{im}(\mathbf{L}\_{qr}) = \text{im}\begin{bmatrix} \mathbf{T}\_h & \mathbf{T}\_c & \mathbf{T}\_a & \mathbf{T}\_d \end{bmatrix} \tag{A7}$$

where the matrices **T***<sup>h</sup>* , **T***c*, **T***<sup>a</sup>* and **T***<sup>d</sup>* are previously defined. It is useful to note that im(**L***qr*) includes all subspaces except for the redundant movements subspace. We here begin the calculation with the intersection in Equation (19):

$$\operatorname{im}(\mathbf{L}\_{uc}) \cap \operatorname{im}(\mathbf{L}\_{qr}) \supseteq \operatorname{im}(\mathbf{B}\_{ti}),$$

with

$$\mathbf{B}\_{ti} = \begin{bmatrix} \boldsymbol{\Gamma}\_h & \mathbf{0} & \mathbf{S}\_q & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \ker(\boldsymbol{\Gamma}\_{uc}^T) \cap \mathbf{S}\_u & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}\_h & \mathbf{0} & \mathbf{S}\_q & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \ker(\boldsymbol{\Gamma}\_{uc}^T) \cap \mathbf{S}\_u \end{bmatrix}. \tag{A8}$$

Equation (A8) states a subspace included in the above intersection. In fact, by Equation (A7), the subspace im(**L***qr*) includes all subspaces except for the redundant movements. The following calculation is now given:

$$\operatorname{im}(\mathbf{L}\_{ti}) \cap \operatorname{im}(\mathbf{L}\_{qr}) \supseteq \operatorname{im}(\mathbf{B}\_{uc}),$$

where

$$\mathbf{B}\_{uc} = \begin{bmatrix} \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \dots \\ \boldsymbol{\Gamma}\_{uc} & \mathbf{0} & -\boldsymbol{\Gamma}\_{uc} & \mathbf{0} & -\mathbf{H}\boldsymbol{\Gamma}\_{uc} & \mathbf{0} & -\mathbf{H}^{2}\boldsymbol{\Gamma}\_{uc} & \mathbf{0} & \dots \\ \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \dots \\ \mathbf{0} & \boldsymbol{\Gamma}\_{uc} & \mathbf{0} & -\boldsymbol{\Gamma}\_{uc} & \mathbf{0} & -\mathbf{H}\boldsymbol{\Gamma}\_{uc} & \mathbf{0} & -\mathbf{H}^{2}\boldsymbol{\Gamma}\_{uc} & \dots \end{bmatrix}. \tag{A9}$$

This intersection is a result of the definition of **S***<sup>u</sup>* (because **HΓ***uc* ⊆ **S***u*) and of Lemma A5 reported in Appendix B.

Finally,

$$\operatorname{im}(\mathbf{L}\_{ti}) \cap \operatorname{im}(\mathbf{L}\_{uc}) = \operatorname{im}(\mathbf{B}\_{qr}),$$

where

$$\mathbf{B}\_{qr} = \begin{bmatrix} \Gamma\_r & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \Gamma\_r \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \tag{A10}$$

which is very easy to verify.

In the following, we formally prove "part a)" of Theorem 2.

**Proof.** ("Part a)" of Theorem 2) We now calculate

$$\max \mathcal{V}(\mathbf{A}, \operatorname{im}(\mathbf{B}\_{\tau}), \operatorname{im}(\mathbf{B}\_{\operatorname{uc}})) .$$

This calculation is extremely elementary, and it follows that

$$\max \mathcal{V}(\mathbf{A}, \operatorname{im}(\mathbf{B}\_{\tau}), \operatorname{im}(\mathbf{B}\_{\operatorname{uc}})) = \operatorname{im}(\mathbf{B}\_{\operatorname{uc}}).$$
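This identity can also be reproduced numerically: maxV(**A**, im(**B**), im(**E**)) is the limit of the classical ISA recursion V<sub>0</sub> = im(**E**), V<sub>k</sub> = im(**E**) ∩ **A**<sup>−1</sup>(V<sub>k−1</sub> + im(**B**)). A numpy sketch under generic rank assumptions (all helper names are ours):

```python
import numpy as np

def image(M, tol=1e-10):
    U, s, _ = np.linalg.svd(M)
    return U[:, : int((s > tol).sum())]          # orthonormal basis of im(M)

def kernel(M, tol=1e-10):
    _, s, Vt = np.linalg.svd(M)
    return Vt[int((s > tol).sum()):].T           # orthonormal basis of ker(M)

def intersect(P, Q, tol=1e-10):
    # im(P) ∩ im(Q): solve P a = Q b via the kernel of [P  -Q].
    N = kernel(np.hstack([P, -Q]), tol)
    return image(P @ N[: P.shape[1]], tol)

def max_controlled_invariant(A, B, E, tol=1e-10):
    # max V(A, im(B), im(E)) by the invariant subspace algorithm (ISA):
    # V0 = im(E), Vk = im(E) ∩ A^{-1}(V(k-1) + im(B)).
    n = A.shape[0]
    V = image(E, tol)
    while True:
        S = image(np.hstack([V, B]), tol)                  # V + im(B)
        pre = kernel((np.eye(n) - S @ S.T) @ A, tol)       # A^{-1}(V + im(B))
        V_new = intersect(image(E, tol), pre, tol)
        if V_new.shape[1] == V.shape[1]:                   # converged
            return V_new
        V = V_new
```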

Next, we calculate

$$\min \mathcal{S}(\mathbf{A}, \operatorname{im}(\mathbf{B}\_{uc}), \operatorname{im}(\mathbf{B}\_{\tau})).$$

In general, the following recursion holds, independently of the representative bases (it can be recalled that subspaces are independent of every possible choice of basis):

$$\begin{aligned} \mathcal{Z}\_0 &= \operatorname{im}(\mathbf{B}\_\tau), \\ \mathcal{Z}\_k &= \mathcal{Z}\_{k-1} + \mathbf{A}(\mathcal{Z}\_{k-1} \cap \operatorname{im}(\mathbf{B}\_{uc})), \end{aligned}$$

where

$$\begin{aligned} \operatorname{im}(\mathbf{B}\_{uc}) &= \operatorname{im}(\mathbf{L}\_{ti}) \cap \operatorname{im}(\mathbf{L}\_{qr}), \\ \mathcal{Z}\_{1} &= \operatorname{im}(\mathbf{B}\_{\tau}) + \mathbf{A}\left(\operatorname{im}(\mathbf{B}\_{\tau}) \cap \operatorname{im}(\mathbf{B}\_{uc})\right), \\ \mathcal{Z}\_{1} &= \operatorname{im}(\mathbf{B}\_{\tau}) + \mathbf{A}\operatorname{im}\begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \boldsymbol{\Gamma}\_{qc} \\ \mathbf{0} \end{bmatrix}, \\ \mathcal{Z}\_{1} &= \operatorname{im}\begin{bmatrix} \boldsymbol{\Gamma}\_{qc} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ -\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{B}\mathbf{J}\boldsymbol{\Gamma}\_{qc} & \mathbf{M}\_{h}^{-1} \\ \mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\boldsymbol{\Gamma}\_{qc} & \mathbf{0} \end{bmatrix}, \\ \mathcal{Z}\_{1} &= \operatorname{im}\begin{bmatrix} \boldsymbol{\Gamma}\_{qc} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{M}\_{h}^{-1} \\ \mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\boldsymbol{\Gamma}\_{qc} & \mathbf{0} \end{bmatrix}. \end{aligned}$$

We now calculate

$$\mathcal{Z}\_2 = \mathcal{Z}\_1 + \mathbf{A} (\mathcal{Z}\_1 \cap \operatorname{im}(\mathbf{B}\_{uc})) .$$

Next, the following emerges:

$$\operatorname{im} \begin{bmatrix} \boldsymbol{\Gamma}\_{qc} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc} \end{bmatrix} \subseteq \operatorname{im} (\mathbf{B}\_{uc});$$

this shows that the intersection can be calculated separately. In fact, it is worth remembering that, for subspace intersections, the distributive property can be used only if at least one of the subspaces is included in the other.

Now,

$$\operatorname{im}\begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \mathbf{M}\_h^{-1} \\ \mathbf{0} \end{bmatrix} \cap \operatorname{im}(\mathbf{B}\_{uc}) = \{\mathbf{0}\}$$

because **M**<sub>*h*</sub><sup>−1</sup> has full rank. The other intersection, for all *a* and suitable *b*, *c*, *d* and *e*, can be calculated as follows:

$$\begin{bmatrix} \boldsymbol{\Gamma}\_{qc} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc} \end{bmatrix} a = \begin{bmatrix} \boldsymbol{\Gamma}\_{qc} \\ \boldsymbol{\Gamma}\_{uc} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix} b + \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \boldsymbol{\Gamma}\_{qc} \\ \boldsymbol{\Gamma}\_{uc} \end{bmatrix} c + \begin{bmatrix} -\boldsymbol{\Gamma}\_{qc} \\ \boldsymbol{\Gamma}\_{uc} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix} d + \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \boldsymbol{\Gamma}\_{qc} \\ -\boldsymbol{\Gamma}\_{uc} \end{bmatrix} e.$$

It is easy to see that *c* = −*e* and *d* = −*b*. Thus,

$$\mathcal{Z}\_1 \cap \operatorname{im}(\mathbf{B}\_{uc}) = \operatorname{im}\begin{bmatrix} \boldsymbol{\Gamma}\_{qc} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc} \end{bmatrix}.$$

Now,

$$\mathcal{Z}\_2 = \operatorname{im}\begin{bmatrix} \mathbf{0} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} \\ \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{M}\_{h}^{-1} \\ \mathbf{X}\_1 & \mathbf{X}\_2 & \mathbf{0} \end{bmatrix},$$

where

$$\mathbf{X}\_1 = \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J} \boldsymbol{\Gamma}\_{qc} - \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{G}^T (\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc})$$

and

$$\mathbf{X}\_2 = \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc}.$$

This can be written as

$$\mathcal{Z}\_2 = \operatorname{im} \begin{bmatrix} -\boldsymbol{\Gamma}\_{qc} & \boldsymbol{\Gamma}\_{qc} & \mathbf{0} \\ \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{M}\_h^{-1} \\ -\mathbf{H}^2 \boldsymbol{\Gamma}\_{uc} & \mathbf{H} \boldsymbol{\Gamma}\_{uc} & \mathbf{0} \end{bmatrix}.$$

This calculation does not need to determine the minimum subspace exactly; the resulting subspace is sufficient to test the condition. It is useful to remember that R*Buc* = maxV(**A**, im(**B***τ*), im(**B***uc*)) ∩ minS(**A**, im(**B***uc*), im(**B***τ*)); in our case, R*Buc* ⊇ im(**B***uc*) ∩ Z2, with Z<sup>2</sup> ⊆ Z<sup>∞</sup> and im(**B***uc*) = maxV(**A**, im(**B***τ*), im(**B***uc*)). It can then be concluded that if **E***uc*(im(**B***uc*) ∩ Z2) = im(**E***uc*), it will also be true that **E***uc*(R*Buc* ) = im(**E***uc*).

$$\mathcal{R}\_{\mathbf{B}\_{\scriptscriptstyle{uc}}} \supseteq \max \mathcal{V}(\mathbf{A}, \operatorname{im}(\mathbf{B}\_{\scriptscriptstyle{\tau}}), \operatorname{im}(\mathbf{B}\_{\scriptscriptstyle{uc}})) \cap \mathcal{Z}\_2.$$

This calculation is simple:

$$\mathcal{R}\_{\mathbf{B}\_{uc}} \supseteq \operatorname{im} \begin{bmatrix} \boldsymbol{\Gamma}\_{qc} & -\boldsymbol{\Gamma}\_{qc} & \mathbf{0} \\ \mathbf{0} & \mathbf{M}\_{o}^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \boldsymbol{\Gamma}\_{qc} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \boldsymbol{\Gamma}\_{qc} \\ \mathbf{H} \boldsymbol{\Gamma}\_{uc} & -\mathbf{H}^{2} \boldsymbol{\Gamma}\_{uc} & \mathbf{0} \end{bmatrix}.$$

To complete the proof, it remains to be verified that

$$\mathbf{E}\_{\mathsf{u}c} \mathcal{R}\_{\mathbf{B}\_{\mathsf{u}c}} = \mathsf{im}(\mathbf{E}\_{\mathsf{u}c}) .$$

This is trivial; in fact,

$$\mathbf{E}\_{uc} = \boldsymbol{\Gamma}\_{uc} \left(\boldsymbol{\Gamma}\_{uc}^T \boldsymbol{\Gamma}\_{uc}\right)^{-1} \begin{bmatrix} \mathbf{0} & \boldsymbol{\Gamma}\_{uc}^T & \mathbf{0} & \mathbf{0} \end{bmatrix}.$$

The theorem is thus proved, since **JΓ***qc* = **G**<sup>*T*</sup>**Γ***uc* and since **Γ**<sub>*uc*</sub><sup>*T*</sup>**M**<sub>*o*</sub><sup>−1</sup>**GBG**<sup>*T*</sup>**Γ***uc* has full rank. ◻
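The recursion Z<sub>k</sub> = Z<sub>k−1</sub> + **A**(Z<sub>k−1</sub> ∩ im(**B***uc*)) used in the proof is the standard algorithm for the minimal conditioned invariant minS(**A**, im(**C**), im(**B**)). It can be implemented with basic subspace arithmetic; a numpy sketch (the helper names are ours):

```python
import numpy as np

def image(M, tol=1e-10):
    U, s, _ = np.linalg.svd(M)
    return U[:, : int((s > tol).sum())]          # orthonormal basis of im(M)

def intersect(P, Q, tol=1e-10):
    # im(P) ∩ im(Q) via the kernel of [P  -Q].
    _, s, Vt = np.linalg.svd(np.hstack([P, -Q]))
    N = Vt[int((s > tol).sum()):].T
    return image(P @ N[: P.shape[1]], tol)

def min_conditioned_invariant(A, C, B, tol=1e-10):
    # min S(A, im(C), im(B)):
    # Z0 = im(B), Zk = Z(k-1) + A (Z(k-1) ∩ im(C)).
    Z = image(B, tol)
    while True:
        Z_new = image(np.hstack([Z, A @ intersect(Z, C, tol)]), tol)
        if Z_new.shape[1] == Z.shape[1]:         # dimension stalled: converged
            return Z_new
        Z = Z_new
```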

In the following, we formally prove "part b)" of Theorem 2.

**Proof.** ("Part b)" of Theorem 2)

We begin by calculating the controlled invariant subspace

maxV(**A**, im(**B***τ*), im(**B***ti*))

and the conditioned invariant subspace

$$\min \mathcal{S}(\mathbf{A}, \operatorname{im}(\mathbf{B}\_{li}), \operatorname{im}(\mathbf{B}\_{\tau})).$$

To calculate the first of the above subspaces, it is sufficient to find a subspace im(**V**), controlled invariant in (**A**, **B***τ*) and included in im(**B***ti*), with the following structure. To carry out this kind of proof, it is not necessary to find a controlled invariant subspace; it is sufficient to consider a subspace included in im(**B***ti*). This choice is helpful in designing the controller: in fact, it is constructive, and the resolvent subspace must be controlled invariant.

$$\mathbf{V} = \begin{bmatrix} \Gamma\_h & \mathbf{0} & \mathbf{S}\_q \mathbf{Z} & \mathbf{0} & \mathbf{M}\_1 & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{M}\_b & \mathbf{M}\_2 & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \Gamma\_h & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{S}\_q \mathbf{Z} & \mathbf{0} & \mathbf{M}\_1 \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{M}\_b & \mathbf{M}\_2 \end{bmatrix}. \tag{A11}$$

Here, **Z** is such that

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q\mathbf{Z}) = \operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q) \cap \ker(\boldsymbol{\Gamma}\_{uc}^T). \tag{A12}$$

The subspace im(**V**) must be controlled invariant, and it is necessary that:

$$\mathbf{A}\operatorname{im}(\mathbf{V}) \subseteq \operatorname{im}(\mathbf{V}) + \operatorname{im}(\mathbf{B}\_{\tau}),\tag{A13}$$

$$\text{im}(\mathbf{V}) \subseteq \text{im}(\mathbf{B}\_{ti}).\tag{A14}$$

Condition (A14) is satisfied if:

$$\operatorname{im}(\mathbf{M}\_1) \subseteq \mathbf{S}\_{q}, \tag{A15}$$

$$\text{im}(\mathbf{M}\_2) \subseteq \ker(\Gamma\_{\mathfrak{u}\mathfrak{c}}^T),\tag{A16}$$

$$\text{im}(\mathbf{M}\_b) \subseteq \ker(\boldsymbol{\Gamma}\_{uc}^T). \tag{A17}$$

Furthermore, condition (A13) is satisfied if

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q\mathbf{Z}) \subseteq \operatorname{im}\begin{bmatrix} \mathbf{M}\_b & \mathbf{M}\_2 \end{bmatrix}, \tag{A18}$$

$$\operatorname{im}(-\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^T\mathbf{M}\_b) \subseteq \operatorname{im}\begin{bmatrix} \mathbf{M}\_b & \mathbf{M}\_2 \end{bmatrix}, \tag{A19}$$

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{M}\_1 - \mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^T\mathbf{M}\_2) \subseteq \operatorname{im}\begin{bmatrix}\mathbf{M}\_b & \mathbf{M}\_2\end{bmatrix}.\tag{A20}$$

In Appendix B, it is demonstrated that if **S***<sup>q</sup>* ≠ **0**, then it is always possible to resolve the last three relations:

$$\operatorname{im} \begin{bmatrix} \mathbf{M}\_b & \mathbf{M}\_2 \end{bmatrix} \neq \mathbf{0}.$$

We now calculate

$$\min \mathcal{S}(\mathbf{A}, \operatorname{im}(\mathbf{B}\_{ti}), \operatorname{im}(\mathbf{B}\_{\tau})) $$

using the following algorithm:

$$\begin{aligned} \mathcal{Z}\_0 &= \operatorname{im}(\mathbf{B}\_\tau), \\ \mathcal{Z}\_k &= \mathcal{Z}\_{k-1} + \mathbf{A}(\mathcal{Z}\_{k-1} \cap \operatorname{im}(\mathbf{B}\_{ti})), \end{aligned}$$

where **A** is defined as

$$\mathbf{A} = \begin{bmatrix} \mathbf{0} & \mathbf{0} & \mathbf{I} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I} \\ -\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{K}\mathbf{J} & \mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{K}\mathbf{G}^{T} & -\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{B}\mathbf{J} & \mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{B}\mathbf{G}^{T} \\ \mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{K}\mathbf{J} & -\mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^{T} & \mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{B}\mathbf{J} & -\mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{B}\mathbf{G}^{T} \end{bmatrix}$$

and

$$\mathbf{B}\_{\tau} = \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \mathbf{M}\_{h}^{-1} \\ \mathbf{0} \end{bmatrix}.$$
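For the case study (all inertia, stiffness, and damping matrices normalized to the identity), **A** and **B***τ* can be assembled directly from the definitions above; a numpy sketch:

```python
import numpy as np

JT = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 0, 0, 1]], dtype=float)    # J^T (3x4), so J = JT.T is 4x3
G = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.5, 0.0, -0.5, 0.0]])         # grasp matrix (3x4)
J = JT.T
Mh = Mo = np.eye(3)                           # hand/object inertias (normalized)
K = B = np.eye(4)                             # contact stiffness and damping
Mhi, Moi = np.linalg.inv(Mh), np.linalg.inv(Mo)
Z3, I3 = np.zeros((3, 3)), np.eye(3)

# State x = [q; u; dq; du], so A is 12x12 and B_tau is 12x3.
A = np.block([
    [Z3, Z3, I3, Z3],
    [Z3, Z3, Z3, I3],
    [-Mhi @ JT @ K @ J,  Mhi @ JT @ K @ G.T, -Mhi @ JT @ B @ J,  Mhi @ JT @ B @ G.T],
    [ Moi @ G @ K @ J,  -Moi @ G @ K @ G.T,   Moi @ G @ B @ J,  -Moi @ G @ B @ G.T],
])
Btau = np.vstack([np.zeros((6, 3)), Mhi, np.zeros((3, 3))])

print(A.shape, Btau.shape)   # (12, 12) (12, 3)
```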

Now, the following holds:

$$(\mathcal{Z}\_0 \cap \operatorname{im}(\mathbf{B}\_{ti})) = \operatorname{im}\begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \boldsymbol{\Gamma}\_{h} & \mathbf{S}\_q \mathbf{Z} \\ \mathbf{0} & \mathbf{0} \end{bmatrix},$$

where **Z** is such that

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q\mathbf{Z}) = \operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q) \cap \ker(\boldsymbol{\Gamma}\_{uc}^T). \tag{A21}$$

This involves

$$\mathcal{Z}\_1 = \operatorname{im} \begin{bmatrix} \boldsymbol{\Gamma}\_h & \mathbf{S}\_q \mathbf{Z} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{M}\_h^{-1} \\ \mathbf{0} & \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{B} \mathbf{J} \mathbf{S}\_q \mathbf{Z} & \mathbf{0} \end{bmatrix}.$$

This subspace is not conditioned invariant, but it suffices for our proof: its dimension is enough to guarantee the rank condition. Now, it is easy to show that

$$\mathcal{R}\_{\mathbf{B}\_{ti}} \supseteq \max \mathcal{V}(\mathbf{A}, \operatorname{im}(\mathbf{B}\_{\tau}), \operatorname{im}(\mathbf{B}\_{ti})) \cap \mathcal{Z}\_1.$$

This calculation is simple, and

$$
\mathcal{R}\_{\mathbf{B}\_{ti}} \supseteq \operatorname{im} \begin{bmatrix}
\Gamma\_{h} & \mathbf{0} & \mathbf{S}\_{q}\mathbf{Z} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
\mathbf{0} & \Gamma\_{h} & \mathbf{0} & \mathbf{S}\_{q}\mathbf{Z} \\
\mathbf{0} & \mathbf{0} & \mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\mathbf{S}\_{q}\mathbf{Z} & \mathbf{0}
\end{bmatrix}.
$$

It is possible to verify that this subspace is not a self-hidden controlled invariant subspace in im(**B***ti*) and that it is not controlled invariant, but this is not necessary for the present proof. It may be useful to recall that R*Bti* = maxV(**A**, im(**B***τ*), im(**B***ti*)) ∩ minS(**A**, im(**B***ti*), im(**B***τ*)) and that, in the present case, R*Bti* ⊇ im(**V**) ∩ Z1. Recalling that Z1 ⊆ Z∞ and that im(**V**) ⊆ im(**B***ti*), if **E***ti*(im(**V**) ∩ Z1) = im(**E***ti*) holds, then **E***ti*(R*Bti*) = im(**E***ti*) will also hold. To conclude, it will be proved that

$$\mathbf{E}\_{ti}\mathcal{R}\_{B\_{ti}} = \text{im}(\mathbf{E}\_{ti}).\tag{A22}$$

Recall that the outputs were defined as follows:

$$\mathbf{e}\_{ti} = \mathbf{E}\_{ti}\mathbf{x}, \quad \text{with} \quad \mathbf{E}\_{ti} = \left(\mathbf{I} - \mathbf{K}\mathbf{G}^{T}(\mathbf{G}\mathbf{K}\mathbf{G}^{T})^{-1}\mathbf{G}\right)\mathbf{C}\_{l} = \begin{bmatrix} \mathbf{Q}\_{k} & \mathbf{0} & \mathbf{Q}\_{\beta} & \mathbf{0} \end{bmatrix}, \tag{A23}$$

where

$$\mathbf{Q} = \mathbf{Q}\_k = \mathbf{Q}\_\beta = (\mathbf{I} - \mathbf{K}\mathbf{G}^T(\mathbf{G}\mathbf{K}\mathbf{G}^T)^{-1}\mathbf{G})\mathbf{K}\mathbf{J}.\tag{A24}$$

We next calculate the null subspace of **Q**.

**Remark A1.** *The null subspace of* **Q** *can easily be calculated. In fact,* ker(**Q**) = ker(**J**) + V*, where* V = {**v** ∣ **KJv** ∈ ker(**I** − **KG***<sup>T</sup>*(**GKG***<sup>T</sup>*)<sup>−1</sup>**G**) = im(**KG***<sup>T</sup>*), **v** ∉ ker(**J**)}*. By Equation (10), it is easy to show that* V = im(**Γ***qc*) *and thus that*

$$\ker(\mathbf{Q}) = \text{im}(\Gamma\_r) + \text{im}(\Gamma\_{qc}).\tag{A25}$$

◻
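A kernel decomposition such as (A25) can be checked numerically by comparing column spaces. The following is a minimal sketch under toy data: the matrix `Q` below is a hypothetical stand-in whose kernel is known, not the paper's **Q**, and the tolerance is illustrative.

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Orthonormal basis of ker(M): right singular vectors of near-zero singular values."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def same_subspace(U, V, tol=1e-10):
    """im(U) == im(V) iff stacking the two bases does not increase the rank."""
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(np.hstack([U, V])) == r(U) == r(V)

# Toy stand-in for Q (its kernel is span{e3}); in the paper, the candidate basis
# on the right-hand side would be [Gamma_r  Gamma_qc].
Q = np.array([[1., 0., 0.],
              [0., 1., 0.]])
K = null_basis(Q)
print(same_subspace(K, np.array([[0.], [0.], [1.]])))  # → True
```

In exact arithmetic the rank test is an equivalence; numerically, the tolerance must be matched to the conditioning of the model matrices.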

The following two Lemmas demonstrate the useful property

$$\mathbf{E}\_{ti}\mathcal{R}\_{B\_{ti}} = \text{im}(\mathbf{E}\_{ti}),\tag{A26}$$

which is equivalent to

$$\operatorname{im}\left(\mathbf{Q}\begin{bmatrix} \Gamma\_h & \mathbf{S}\_q\mathbf{Z} \end{bmatrix}\right)=\operatorname{im}(\mathbf{Q}).$$

To prove Equation (A26), we show that

$$\ker(\mathbf{Q}) \cap \operatorname{im}\begin{bmatrix} \Gamma\_h & \mathbf{S}\_q\mathbf{Z} \end{bmatrix} = \mathbf{0},\tag{A27}$$

$$\mathsf{rank}\begin{bmatrix} \Gamma\_h & \mathbf{S}\_q\mathbf{Z} \end{bmatrix} = \mathsf{rank}(\mathbf{Q}).\tag{A28}$$

**Lemma A1.**

$$\ker(\mathbf{Q}) \cap \operatorname{im}\begin{bmatrix} \Gamma\_h & \mathbf{S}\_q\mathbf{Z} \end{bmatrix} = \mathbf{0}. \tag{A29}$$

**Proof.** Beginning from Remark A1, Equation (A29) can be verified by determining whether nonzero vectors **x**, **y**, **v** and **w** exist such that

$$
\Gamma\_r \mathbf{x} + \Gamma\_{qc} \mathbf{y} = \Gamma\_h \mathbf{v} + \mathbf{S}\_q \mathbf{Z} \mathbf{w}.
$$

In fact, im(**Γ***qc*), im(**Γ***h*) and im(**S***q*) are included in im(**M***h*<sup>−1</sup>**J***<sup>T</sup>*), while im(**Γ***r*) is not, because it is included in ker(**J**). In general, given a linear map **L**, im(**L***<sup>T</sup>*) ⊕ ker(**L**) is the whole space. Thus, the above equation can be written in the following form:

$$
\Gamma\_{qc}\mathbf{y} = \Gamma\_{h}\mathbf{v} + \mathbf{S}\_{q}\mathbf{Z}\mathbf{w}.
$$

If this equation holds, then

$$\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J} \Gamma\_{qc} \mathbf{y} = \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J} \Gamma\_h \mathbf{v} + \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J} \mathbf{S}\_q \mathbf{Z} \mathbf{w}.$$

By Equation (10) and im(**Γ***h*) ⊆ ker(**GKJ**) from Equation (A6), we can deduce the following:

$$\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^{T} \Gamma\_{uc} \mathbf{y} = \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J} \mathbf{S}\_q \mathbf{Z} \mathbf{w}.$$

However, this is never verified. By the choice of **Z**, im(**M***o*<sup>−1</sup>**GKJS***q***Z**) ⊆ ker(**Γ***uc*<sup>*T*</sup>), while it is easy to show that if im(**M***o*<sup>−1</sup>**GKG***<sup>T</sup>***Γ***uc*) ⊆ ker(**Γ***uc*<sup>*T*</sup>) held, then the matrix **M***o*<sup>−1</sup>**GKG***<sup>T</sup>* would be an orthogonal projector. However, this is not true, because it is not in projector form. It is useful to recall that, given a subspace L with basis matrix **L**, ker(**L***<sup>T</sup>*) = (im(**L**))<sup>⊥</sup> and the orthogonal projector onto (im(**L**))<sup>⊥</sup> is (**I** − **L**(**L***<sup>T</sup>***L**)<sup>−1</sup>**L***<sup>T</sup>*).

Condition (A29) is thus proven. ◻
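Condition (A29) is a trivial-intersection statement, and such statements can be verified numerically through a rank identity: two column spaces intersect trivially exactly when their ranks add up under stacking. A minimal sketch with hypothetical toy matrices (not the paper's **Q**, **Γ***h* or **S***q***Z**):

```python
import numpy as np

def trivial_intersection(U, V, tol=1e-10):
    """im(U) ∩ im(V) = {0} iff rank([U V]) = rank(U) + rank(V)."""
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(np.hstack([U, V])) == r(U) + r(V)

# Toy check: two distinct coordinate directions intersect trivially,
# while a nonzero subspace never intersects itself trivially.
e1 = np.array([[1.], [0.], [0.]])
e2 = np.array([[0.], [1.], [0.]])
print(trivial_intersection(e1, e2))  # → True
print(trivial_intersection(e1, e1))  # → False
```

In the lemma, `U` would be a basis of ker(**Q**) and `V` the block matrix [**Γ***h* **S***q***Z**].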

**Lemma A2.**

$$\begin{aligned} \mathsf{rank}\begin{bmatrix} \Gamma\_h & \mathbf{S}\_q \mathbf{Z} \end{bmatrix} &= \mathsf{rank}(\Gamma\_h) + \mathsf{rank}(\mathbf{S}\_q \mathbf{Z}) \\ &= q - r - c. \end{aligned}$$

**Proof.** The first equality derives from the null intersection between im(**Γ***h*) and im(**S***q***Z**). In fact, by condition (A6), im(**Γ***h*) is a subspace of maxI(**M***h*<sup>−1</sup>**J***<sup>T</sup>***KJ**, ker(**GKJ**)) orthogonal to im(**M***h*<sup>−1</sup>**S***q*), in accordance with Equation (A4). The proof of the second equality of the lemma begins with the following considerations. First,

$$\max \mathcal{I}(\mathbf{M}\_h^{-1} \mathbf{J}^T \mathbf{K} \mathbf{J}, \ker(\mathbf{G} \mathbf{K} \mathbf{J})) = \operatorname{im}(\mathbf{M}\_h^{-1} \mathbf{S}\_q)^\perp,$$

from which it follows that

$$\operatorname{im}(\mathbf{M}\_h^{-1}\mathbf{J}^T) \subseteq \max \mathcal{I}(\mathbf{M}\_h^{-1}\mathbf{J}^T\mathbf{K}\mathbf{J}, \ker(\mathbf{G}\mathbf{K}\mathbf{J})) \oplus \operatorname{im}(\mathbf{M}\_h^{-1}\mathbf{S}\_q).$$

Now, by Equation (A4) im(**M**−<sup>1</sup> *h* **<sup>S</sup>***q*) <sup>⊆</sup> im(**M**−<sup>1</sup> *h* **J** *T* ) and from the above inclusion and the definition of **Γ***<sup>h</sup>* in Equation (A6), it follows that

$$\begin{split} \operatorname{im}(\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}) &= \operatorname{im}(\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}) \cap \left( \max\mathcal{I}(\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{K}\mathbf{J}, \ker(\mathbf{G}\mathbf{K}\mathbf{J})) \oplus \operatorname{im}(\mathbf{M}\_{h}^{-1}\mathbf{S}\_{q}) \right) \\ &= \left(\operatorname{im}(\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}) \cap \max\mathcal{I}(\mathbf{M}\_{h}^{-1}\mathbf{J}^{T}\mathbf{K}\mathbf{J}, \ker(\mathbf{G}\mathbf{K}\mathbf{J})) \right) \oplus \operatorname{im}(\mathbf{M}\_{h}^{-1}\mathbf{S}\_{q}) \\ &= \operatorname{im}(\Gamma\_{h}) \oplus \operatorname{im}(\mathbf{M}\_{h}^{-1}\mathbf{S}\_{q}). \end{split}$$

We thus obtain

$$\mathsf{rank}(\Gamma\_h) + \mathsf{rank}(\mathbf{S}\_q) = \mathsf{rank}(\mathbf{M}\_h^{-1} \mathbf{J}^T) = \mathsf{rank}(\mathbf{J}) = q - r$$

and 
$$\mathsf{rank}(\Gamma\_h) = q - r - \mathsf{rank}(\mathbf{S}\_q). \tag{A30}$$

The rank of **S***q***Z** remains to be calculated. Recalling that **S***<sup>q</sup>* and **Z** are basis matrices and that from Equation (A12) rank(**Z**) ≤ rank(**S***q*), it follows that

$$\mathsf{rank}(\mathbf{S}\_{q}\mathbf{Z}) = \mathsf{rank}(\mathbf{Z}).\tag{A31}$$

By the introduction of **Z** in Equation (A12), it follows that

$$\mathsf{rank}(\mathbf{Z}) = \mathsf{rank}(\mathbf{S}\_q) - \mathsf{rank}(\mathbf{Z}^\perp). \tag{A32}$$

Here, rank(**S***q*) is the number of components of **z** ∈ **Z**. The last part of this demonstration consists of estimating rank(**Z**<sup>⊥</sup>), which, by Equation (A12), is

$$\mathsf{rank}(\mathbf{Z}^{\perp}) = \mathsf{rank}(\mathbf{S}\_q^T \mathbf{J}^T \mathbf{K} \mathbf{G}^T \mathbf{M}\_o^{-1} \Gamma\_{uc}).$$

By Equation (A4), it is easy to show that ker(**S***q*<sup>*T*</sup>) ⊆ ker(**GKJ**). Thus, ker(**S***q*<sup>*T*</sup>) ∩ im(**J***<sup>T</sup>***KG***<sup>T</sup>*) = **0**, and

$$\mathsf{rank}(\mathbf{Z}^{\perp}) = \mathsf{rank}(\mathbf{J}^{T}\mathbf{K}\mathbf{G}^{T}\mathbf{M}\_{o}^{-1}\Gamma\_{uc}).\tag{A33}$$

Now, we prove that

$$\mathsf{rank}(\mathbf{Z}^{\perp}) = \mathsf{rank}(\mathbf{J}^{T}\mathbf{K}\mathbf{G}^{T}\mathbf{M}\_{o}^{-1}\Gamma\_{uc}) = \mathsf{rank}(\Gamma\_{uc}) = c.\tag{A34}$$

If we transpose Equation (A33), the following holds:

$$\mathsf{rank}(\mathbf{Z}^{\perp}) = \mathsf{rank}(\Gamma\_{uc}^{T}\mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{K}\mathbf{J}).$$

By Equation (10),

$$\mathsf{rank}(\mathbf{Z}^{\perp}) = \mathsf{rank}(\Gamma\_{uc}^{T}\mathbf{M}\_{o}^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^{T}\Gamma\_{uc}) = \mathsf{rank}(\Gamma\_{uc}),$$

where the last equality follows because the matrix **Γ***uc*<sup>*T*</sup>**M***o*<sup>−1</sup>**GKG***<sup>T</sup>***Γ***uc* has full rank. Finally, by Equations (A31), (A32) and (A34), it can be concluded that

$$\mathsf{rank}(\mathbf{S}\_{\mathsf{q}}\mathbf{Z}) = \mathsf{rank}(\mathbf{S}\_{\mathsf{q}}) - c.$$

Comparing this last result with Equation (A30), we obtain

$$\mathsf{rank}\begin{bmatrix} \Gamma\_{h} & \mathbf{S}\_{q}\mathbf{Z} \end{bmatrix} = q - r - c.$$

◻

**Remark A2.** *Equation (A28) was proved only in the case of kinematic defectivity* (ker(**J***<sup>T</sup>*) ≠ **0**)*, i.e., with* **J** ∈ R<sup>(*t*×*q*)</sup> *and t* > *q. It is easy to prove that the case t* ≤ *q is a trivial extension. Let r and c be the ranks of the matrices* **Γ***r* *and* **Γ***uc, respectively. It follows that* rank(**J**) = *q* − *r*. *By Lemma A2,* rank[ **Γ***h* **S***q***Z** ] = rank(**Γ***h*) + rank(**S***q***Z**) = *q* − *r* − *c*. *In conclusion, Equation (A28) demonstrates that*

$$\mathsf{rank}(\mathbf{Q}) = q - r - c,$$

*which follows trivially from Equation (A25). In fact,* rank(**Q**) = rank(**Q***<sup>T</sup>*) = *q* − dim(ker(**Q**)) = *q* − (*r* + *c*)*.*

In the following, we formally prove "part c)" of Theorem 2.

**Proof.** ("Part c)" of Theorem 2)

It is possible to show the following:

$$\operatorname{im}(\mathbf{L}\_{ti}) \cap \operatorname{im}(\mathbf{L}\_{uc}) \supseteq \operatorname{im} \begin{bmatrix} \Gamma\_r & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \Gamma\_r \\ \mathbf{0} & \mathbf{0} \end{bmatrix}.$$

This subspace is **A**-invariant and thus guarantees the necessary conditions. By

$$\mathbf{E}\_{qr} = \Gamma\_r (\Gamma\_r^T \Gamma\_r)^{-1} \begin{bmatrix} \Gamma\_r^T \mathbf{M}\_h & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{J}$$

the rank condition is also preserved. ◻

#### **Appendix B**

In this appendix, we provide several technical results useful for the calculations given in Appendix A.

**Lemma A3.** *Let* **S***q* *and* **S***u* *be basis matrices of* minI(**M***h*<sup>−1</sup>**J***<sup>T</sup>***KJ**, **M***h*<sup>−1</sup>**J***<sup>T</sup>***KG***<sup>T</sup>*) *and* minI(**M***o*<sup>−1</sup>**GKG***<sup>T</sup>*, **M***o*<sup>−1</sup>**GKJ**)*, respectively. Then,*

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\mathbf{S}\_q) \subseteq \operatorname{im}(\mathbf{S}\_u).$$

**Proof.** By definition,

$$\mathbf{S}\_q = \min \mathcal{I}(\mathbf{M}\_h^{-1} \mathbf{J}^T \mathbf{K} \mathbf{J}, \mathbf{M}\_h^{-1} \mathbf{J}^T \mathbf{K} \mathbf{G}^T) \quad \text{and} \tag{A35}$$

$$\mathbf{S}\_u = \min \mathcal{I}(\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^{T}, \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J}). \tag{A36}$$

Now,

$$\mathbf{S}\_u^{\perp} = \max \mathcal{I}(\mathbf{G} \mathbf{K} \mathbf{G}^{T} \mathbf{M}\_o^{-1}, \ker(\mathbf{J}^{T}\mathbf{K}\mathbf{G}^{T}\mathbf{M}\_o^{-1})),\tag{A37}$$

$$(\mathbf{S}\_u)^{\perp} \subseteq \ker(\mathbf{J}^{T}\mathbf{K}\mathbf{G}^{T}\mathbf{M}\_o^{-1}),\tag{A38}$$

$$\left(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\mathbf{S}\_q\right)^\perp = \ker(\mathbf{S}\_q^T\mathbf{J}^T\mathbf{B}\mathbf{G}^T\mathbf{M}\_o^{-1}) \supseteq \ker(\mathbf{J}^T\mathbf{B}\mathbf{G}^T\mathbf{M}\_o^{-1}).\tag{A39}$$

Thus,

$$(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\mathbf{S}\_q)^\perp \supseteq \mathbf{S}\_u^\perp \tag{A40}$$

and finally,

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\mathbf{S}\_q) \subseteq \operatorname{im}(\mathbf{S}\_u).\tag{A41}$$

◻

**Lemma A4.** *Let* **S***q* *and* **S***u* *be defined as above. It follows that*

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\mathbf{S}\_q) \cap \ker(\Gamma\_{uc}^T) \cap \operatorname{im}(\mathbf{S}\_u) = \operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{B}\mathbf{J}\mathbf{S}\_q\mathbf{Z}).$$

**Proof.** Recalling the definition of **Z**, it follows that im(**M***o*<sup>−1</sup>**GKJS***q***Z**) = im(**M***o*<sup>−1</sup>**GKJS***q*) ∩ ker(**Γ***uc*<sup>*T*</sup>), although it remains to be demonstrated that im(**M***o*<sup>−1</sup>**GBJS***q***Z**) ∩ im(**S***u*) = im(**M***o*<sup>−1</sup>**GKJS***q***Z**). This follows immediately from Lemma A3. ◻

**Lemma A5.** *The complementary subspace* im(**T***a*) *was defined in [49] as the* deforming motions subspace*. It is possible to choose a complementary subspace* im(**T***a*) *such that*

$$\operatorname{im} \begin{bmatrix} \Gamma\_{qc} & \mathbf{0} \\ -\Gamma\_{uc} & \mathbf{0} \\ \mathbf{0} & \Gamma\_{qc} \\ \mathbf{0} & -\Gamma\_{uc} \end{bmatrix} \subseteq \operatorname{im}(\mathbf{T}\_a).$$

This part of the appendix discusses, through the following lemma, three necessary conditions to obtain the controlled invariant subspace im(**V**) as pointed out in Appendix A.

**Lemma A6.**

$$\begin{aligned} \operatorname{im}(\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J} \mathbf{S}\_q \mathbf{Z}) &\subseteq \operatorname{im}\begin{bmatrix}\mathbf{M}\_b & \mathbf{M}\_2 \end{bmatrix}, \\ \operatorname{im}(-\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^T \mathbf{M}\_b) &\subseteq \operatorname{im}\begin{bmatrix}\mathbf{M}\_b & \mathbf{M}\_2 \end{bmatrix}, \\ \operatorname{im}(\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{J} \mathbf{M}\_1 - \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^T \mathbf{M}\_2) &\subseteq \operatorname{im}\begin{bmatrix}\mathbf{M}\_b & \mathbf{M}\_2 \end{bmatrix}. \end{aligned}$$

**Proof.** The proof starts by distinguishing three possible cases, depending on ker(**Γ***uc*<sup>*T*</sup>). Case 1:

ker(**Γ***uc*<sup>*T*</sup>) is **M***o*<sup>−1</sup>**GKG***<sup>T</sup>*-invariant.

This is the simplest case. In fact, if we take im(**M***b*) = ker(**Γ***uc*<sup>*T*</sup>) and **M**2 = **0**, the first and the second equations are satisfied automatically, and the third will be satisfied for **M**1 = **0**. Case 2:

ker(**Γ***uc*<sup>*T*</sup>) ⊉ **M***o*<sup>−1</sup>**GKG***<sup>T</sup>* ker(**Γ***uc*<sup>*T*</sup>) and ker(**Γ***uc*<sup>*T*</sup>) ∩ **M***o*<sup>−1</sup>**GKG***<sup>T</sup>* ker(**Γ***uc*<sup>*T*</sup>) ≠ **0**. In this case, the second equation can be verified by the following:

$$\begin{aligned} \operatorname{im}(\mathbf{M}\_2) &= \ker(\Gamma\_{uc}^T), \\ \mathbf{M}\_b:\ \operatorname{im}(\mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^T \mathbf{M}\_b) &= \ker(\Gamma\_{uc}^T) \cap \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^T \ker(\Gamma\_{uc}^T). \end{aligned}$$

Now, the first equation is trivially verified, while the third will be verified if

$$\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^T\ker(\boldsymbol{\Gamma}\_{uc}^T) \subseteq \text{im}\left[\quad \mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q \quad \ker(\boldsymbol{\Gamma}\_{uc}^T) \ \right].$$

We will demonstrate that this condition is always verified.

Case 3:

The last case to analyse is that in which

$$\ker(\Gamma\_{uc}^T) \cap \mathbf{M}\_o^{-1} \mathbf{G} \mathbf{K} \mathbf{G}^T \ker(\Gamma\_{uc}^T) = \mathbf{0}.$$

Under this condition, the second equation is satisfied only with **M***b* = **0**. To satisfy the remaining equations, it is sufficient to set im(**M**2) = ker(**Γ***uc*<sup>*T*</sup>). This implies the same condition as in the second case and thus involves the following condition:

$$\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^T\ker(\boldsymbol{\Gamma}\_{uc}^T) \subseteq \text{im}\left[\quad\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q \quad \ker(\boldsymbol{\Gamma}\_{uc}^T)\right].$$

◻
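All three cases of the lemma reduce to verifying inclusions of the form im(**X**) ⊆ im([**M***b* **M**2]), which can be checked numerically with a single rank comparison. A minimal sketch; the matrices `X` and `Y` below are hypothetical stand-ins, not the model matrices of the paper:

```python
import numpy as np

def is_contained(X, Y, tol=1e-10):
    """im(X) ⊆ im(Y) iff appending the columns of X does not raise rank(Y)."""
    r = lambda M: np.linalg.matrix_rank(M, tol)
    return r(np.hstack([Y, X])) == r(Y)

# Toy stand-ins: Y plays the role of [M_b  M_2], X the left-hand-side image.
Y = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
X = np.array([[2.], [3.], [0.]])
print(is_contained(X, Y))                               # → True
print(is_contained(np.array([[0.], [0.], [1.]]), Y))    # → False
```

In the paper's setting, `X` would be, e.g., a basis of **M***o*<sup>−1</sup>**GKG***<sup>T</sup>* ker(**Γ***uc*<sup>*T*</sup>) and `Y` the block matrix [**M***o*<sup>−1</sup>**GKJS***q* ker(**Γ***uc*<sup>*T*</sup>)].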

The following lemma shows how this condition is verified.

**Lemma A7.** *If* **S***<sup>q</sup>* ≠ **0***, then the matrix*

$$\begin{bmatrix} \mathbf{M}\_o^{-1} \mathbf{GKJ} \mathbf{S}\_q & \ker(\Gamma\_{uc}^T) \end{bmatrix}$$

*is a basis matrix of the subspace* R<sup>*d*</sup>*, where d is the dimension of the physical space.*

**Proof.** Since **S***q* is a basis matrix of minI(**M***h*<sup>−1</sup>**J***<sup>T</sup>***KJ**, **M***h*<sup>−1</sup>**J***<sup>T</sup>***KG***<sup>T</sup>*) and **M***h*<sup>−1</sup> is positive definite:

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}) \supseteq \operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q) \supseteq \operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{M}\_h^{-1}\mathbf{J}^T\mathbf{K}\mathbf{G}^T) = \operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}).$$

This implies that

$$\operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\mathbf{S}\_q) = \operatorname{im}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}).$$

It is now easy to prove that

$$\mathbb{R}^d \supseteq \operatorname{im}\begin{bmatrix} \mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J} & \ker(\Gamma\_{uc}^T) \end{bmatrix} \supseteq \operatorname{im}\begin{bmatrix} \mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{J}\Gamma\_{qc} & \ker(\Gamma\_{uc}^T) \end{bmatrix} = \mathbb{R}^d$$

and

$$\mathsf{rank}(\mathbf{M}\_o^{-1}\mathbf{G}\mathbf{K}\mathbf{G}^T\Gamma\_{uc}) = \mathsf{rank}(\Gamma\_{uc}),$$

because **M***o*<sup>−1</sup>**GKG***<sup>T</sup>* has a null subspace equal to zero. ◻

#### **References**

