*Article* **Vision-Based Hybrid Controller to Release a 4-DOF Parallel Robot from a Type II Singularity**

**José L. Pulloquinga <sup>1,</sup>\*, Rafael J. Escarabajal <sup>1</sup>, Jesús Ferrándiz <sup>1</sup>, Marina Vallés <sup>1</sup>, Vicente Mata <sup>2</sup> and Mónica Urízar <sup>3</sup>**


**Abstract:** The high accuracy and dynamic performance of parallel robots (PRs) make them suitable for ensuring safe operation in human–robot interaction. However, these advantages come at the expense of a reduced workspace and the possible appearance of type II singularities. The latter cause a loss of control of the PR and require further analysis to maintain the stiffness of the PR even after a singular configuration is reached. All or a subset of the limbs could be responsible for a type II singularity, and they can be detected by using the angle between two output twist screws (OTSs). However, this angle has not been applied in control because it requires an accurate measure of the pose of the PR. This paper proposes a new hybrid controller to release a 4-DOF PR from a type II singularity based on a real-time vision system. The vision system data are used to automatically readapt the configuration of the PR by moving the limbs identified by the angle between two OTSs. This controller is intended for a knee rehabilitation PR, and the results show how this release is accomplished with smooth controlled movements where the patient's safety is not compromised.

**Keywords:** singular configuration; parallel robot; motion control; 3D tracking; screw theory

#### **1. Introduction**

Parallel robots (PRs) are composed of two or more closed kinematic chains connecting a fixed and a mobile platform, the latter defining the end-effector to be controlled [1]. As opposed to their serial counterparts, they benefit from greater accuracy, stiffness, and load capacity, making them suitable for a great variety of applications [2,3]. Human–robot interaction is one of the major applications, for instance, in the context of medical rehabilitation [4]. Within this field, lower limb rehabilitation [5–9] is an active research area. However, PRs also present several drawbacks regarding the size of their workspace and the presence of singularities within the workspace. The former can be addressed by means of a proper mechanical design of the PR to cover the workspace as required, while the latter requires further analysis.

Singularities in a PR were first analysed by Gosselin and Angeles [10], who established a classification of singular configurations according to the characteristics of the Jacobian matrices calculated from the constraint equations. They defined a type I (or inverse kinematic) singularity to refer to the loss of at least one degree of freedom (DOF) due to a degeneracy of the inverse Jacobian matrix (|*JI*| = 0) and a type II (or forward kinematic) singularity to indicate the gain of at least one DOF caused by the degeneracy of the forward Jacobian matrix (|*JD*| = 0). Some other related classifications of singular configurations can be

**Citation:** Pulloquinga, J.L.; Escarabajal, R.J.; Ferrándiz, J.; Vallés, M.; Mata, V.; Urízar, M. Vision-Based Hybrid Controller to Release a 4-DOF Parallel Robot from a Type II Singularity. *Sensors* **2021**, *21*, 4080. https://doi.org/10.3390/s21124080

Academic Editors: Javier Cuadrado and Miguel Ángel Naya Villaverde

Received: 7 May 2021 Accepted: 9 June 2021 Published: 13 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

found in [11,12]. Type I singularities typically occur as the manipulator approaches the boundary of the workspace and are easy to detect and avoid, but type II singularities can arise within the workspace and are more difficult to treat [13].

Type II singularities prevent the mobile platform from bearing external forces despite having all the actuators locked, leading to an uncontrolled motion of the end-effector. The main goal of lower-limb rehabilitation is to perform specific movements that stimulate the motor plasticity of the patient to improve motor recovery [5]. In conventional rehabilitation, the movements of the patient are controlled and monitored by a physiotherapist, while in robotic rehabilitation, the control task is performed by a PR. For this task, a PR must ensure stiff behaviour despite the presence of type II singularities to maintain control during the rehabilitation process.

Extensive research has been conducted to tackle type II singularities. The determinant of the forward Jacobian *JD* gives no further information than the proximity to a singularity, as it lacks a physical meaning [14]. Based on screw theory [15], a transmission index (TI) was designed by Yuan et al. [16] to express the quality of force and motion transmission by using the transmission wrench screw (TWS) and output twist screw (OTS). Takeda and Funabashi [17] designed a TI that expresses how each actuator individually contributes to the motion of the mobile platform by leaving just one actuator active and the rest locked. Subsequently, Wang et al. [18], using the TI proposed by Takeda and Funabashi, established that in a type II singularity, at least two OTSs are linearly dependent. Pulloquinga et al. [19] proposed the angle between two instantaneous screw axes from the OTSs (Ω) as a measure for the proximity detection of type II singularities, providing physical meaning and the capability to determine the chains producing the singular configuration.

The extensive analysis of type II singularities presented above has been incorporated into motion/force performance evaluation [18,20], path planning, and the design of reconfigurable PRs [21,22]. These analyses have been developed offline, and very little has been found about including this information in the control unit of the PR [23,24]. Agarwal et al. [24] designed a control scheme to avoid type II singularities of a planar PR by using artificial potential functions. The potential functions are activated near the singularity to alter the trajectory by means of repulsion forces. This setting prevents the PR from entering a singular configuration by steering around it. However, an evader controller cannot deal with the situation in which the robot is initially in a type II singularity. Such a task would require extra instrumentation, since the forward kinematic problem based on the joint variable measures does not have a single solution: the multiple feasible poses of the mobile platform are due to the degeneracy of the forward Jacobian matrix.

One unambiguous solution to estimate the actual pose of the PR is by using a vision system [25]. Huynh et al. [26] implemented a vision/position hybrid control for a Hexa PR by defining a two-level closed-loop controller. Amarasinghe et al. [27] designed a vision-based hybrid control on a mobile robot. It could autonomously reach a docking station by using a finite-state machine and proportional control combined with image processing.

However, to the best of the authors' knowledge, no research has been published focusing on PR singularity releasing, i.e., letting the robot autonomously get out of a type II singularity. In this paper, a novel algorithm based on online readings from a vision system is proposed to release the PR from a type II singularity. The proposed algorithm is the first to use the angle Ω as an online detector for the proximity to singular configurations. This algorithm is integrated into a two-level closed-loop hybrid controller that results in more compliant manipulation when performing knee rehabilitation tasks. In the inner loop, there is an algebraic closed-loop controller. The outer loop implements a vision-based controller whose algorithm determines the two limbs that most affect the type II singularity by means of the angle Ω. Then, only the references of those two limbs are modified online to feed the inner loop of the controller.

Section 2 describes the 4-DOF PR for knee rehabilitation used to perform the simulations and experiments. Next, the mathematical foundations of type II singularities and

the angle Ω are explained. Then, the 3D vision system that has been used to keep track of the pose of the PR is presented, together with a detailed description of the proposed vision-based hybrid controller. Section 3 begins with a description of the requirements for simulation and experimentation as well as the singular trajectories that were designed for this research. The main results are also presented in this section. Finally, the results are discussed in Section 4.

#### **2. Materials and Methods**

This section presents the mathematical foundation used in the development of the angle Ω that detects the proximity to a type II singularity in a knee rehabilitation PR. Subsequently, the 3D tracking system (3DTS) used to measure the actual pose of the mobile platform is described. Then the 3DTS and Ω are combined to develop a novel vision-based hybrid controller to release the actual PR under study from a type II singularity. This section also includes a detailed explanation of the algorithm corresponding to this hybrid controller, which detects and moves the actuators according to the angle Ω.

#### *2.1. 3UPS+RPU Parallel Robot*

After knee surgery, the diagnosis and rehabilitation tasks require two translational movements (*xm*, *zm*) in the tibiofemoral plane, one rotation (*ψ*) around the coronal plane and one rotation (*θ*) around the tibiofemoral plane [28]. These four DOFs are shown in Figure 1. In order to accomplish these requirements, a PR with 4-DOF has been designed, built [29], and optimized [30] at the Universitat Politècnica de València. The PR under study is named 3UPS+RPU due to its four-limb architecture. The external limbs or open kinematic chains have a UPS configuration, while the central one has an RPU configuration (see Figure 1). The letters R, U, S and P stand for revolute, universal, spherical and prismatic joints, respectively; the actuated joint in each limb is the prismatic one (denoted by an underline in the original notation).

**Figure 1.** Mechanical configuration of the 3UPS+RPU PR.

The kinematic model of the 3UPS+RPU PR is established by 15 generalized coordinates as follows:


The variables *q* are measured with respect to a local reference system attached to each joint. The coordinates *xm*, *zm*, *θ*, and *ψ* are measured with respect to the reference system {*Of* − *XFYFZF*} attached to the centre of the fixed platform to reduce the complexity of the model.

The locations of *A*0, *B*0, *C*0, and *D*0 that link the four limbs to the fixed platform are defined by the geometric variables *R*1, *R*2, *R*3, *βFD*, *βFI*, and *ds*. The locations of *A*1, *B*1, *C*1, and *Om* that link the limbs to the mobile platform are defined by the geometric variables *Rm*1, *Rm*2, *Rm*3, *βMD*, and *βMI*. Figure 1 shows the locations of *A*0, *B*0, and *C*0 and *A*1, *B*1, and *C*1 on the fixed and mobile platforms. Table 1 shows the values of *R*1, *R*2, *R*3, *βFD*, *βFI*, and *ds* measured with respect to {*Of* − *XFYFZF*} and of *Rm*1, *Rm*2, *Rm*3, *βMD*, and *βMI* measured with respect to {*Om* − *XMYMZM*}. The mobile reference system {*Om* − *XMYMZM*} is attached to the centre of the mobile platform.

**Table 1.** Geometric parameters for the 3UPS+RPU PR.


#### *2.2. Type II Singularities*

The velocity equations of a PR [10] are obtained as the time derivatives of the geometrical constraint equations ($\vec{\Phi}$) as follows:

$$J_D \dot{\vec{X}} + J_I \dot{\vec{q}}_{ind} = \vec{0} \tag{1}$$

$\vec{X}$ is the set of outputs that represents the DOFs of the mobile platform; $\vec{q}_{ind}$ is the set of inputs that corresponds to the active joint lengths; and $J_D$ and $J_I$ are the forward and inverse Jacobian matrices, respectively. $J_D$ and $J_I$ are $F \times F$ square matrices for a non-redundant PR, where $F$ is the number of DOFs.
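As a numerical illustration of Equation (1), the sketch below solves the velocity relation for the platform velocities away from a singularity. The Jacobian entries here are placeholder values, not the analytical 3UPS+RPU expressions.

```python
import numpy as np

# Placeholder F x F Jacobians for a non-redundant PR with F = 4 DOFs
# (illustrative values, not the 3UPS+RPU model).
F = 4
J_D = np.array([[2.0, 0.1, 0.0, 0.0],
                [0.1, 1.5, 0.2, 0.0],
                [0.0, 0.2, 1.8, 0.1],
                [0.0, 0.0, 0.1, 1.2]])   # forward Jacobian
J_I = np.eye(F)                          # inverse Jacobian
qdot_ind = np.array([0.01, -0.02, 0.015, 0.0])  # active joint rates [m/s]

# Away from a type II singularity J_D is invertible, so the platform
# velocity Xdot = [xm_dot, zm_dot, theta_dot, psi_dot] follows from
# J_D @ Xdot + J_I @ qdot_ind = 0:
Xdot = np.linalg.solve(J_D, -J_I @ qdot_ind)
print(Xdot)
```

The same relation degenerates when $J_D$ loses rank, which is exactly the type II condition discussed next.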

A type II singularity takes place when $J_D$ is rank deficient, i.e., its determinant is zero ($|J_D| = 0$). In this configuration, if an external force is applied to the PR, the mobile platform may move ($\dot{\vec{X}} \neq \vec{0}$) despite the actuators being locked ($\dot{\vec{q}}_{ind} = \vec{0}$). For this reason, in a type II singularity, the control over the PR is lost, and the robot becomes potentially dangerous for the user or for itself. The PR under study must ensure a safe interaction with the knee of the patient, and, therefore, the treatment of type II singularities is an important problem to solve. A general method to detect the proximity to a type II singularity is to monitor the determinant $|J_D|$.

In the 3UPS+RPU PR, *JD* is defined as a 4 × 4 matrix,

$$J_D = \begin{bmatrix} \dfrac{\partial \vec{\Phi}}{\partial x_m} & \dfrac{\partial \vec{\Phi}}{\partial z_m} & \dfrac{\partial \vec{\Phi}}{\partial \theta} & \dfrac{\partial \vec{\Phi}}{\partial \psi} \end{bmatrix} \tag{2}$$

with $\vec{X} = [x_m \; z_m \; \theta \; \psi]^T$ and $\vec{q}_{ind} = [q_{13} \; q_{23} \; q_{33} \; q_{42}]^T$.

The online calculation of $J_D$ requires an accurate measure of $\vec{X}$. In a model-based controller, $\vec{X}$ is estimated by solving the forward kinematic problem using the sensors installed in the actuated joints. In a type II singularity, the forward kinematic problem has several feasible solutions, and an unambiguous estimation of the actual $\vec{X}$ is not possible. An accurate measure of the actual $\vec{X}$ of the PR therefore requires direct sensing of the mobile platform by means of extra instrumentation, such as a 3DTS.
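A minimal sketch of the determinant-based proximity check mentioned above, with an illustrative threshold (the paper does not prescribe one); a rank-deficient placeholder matrix stands in for a singular $J_D$:

```python
import numpy as np

def near_type_ii(J_D: np.ndarray, tol: float = 1e-3) -> bool:
    """Flag proximity to a type II singularity from the forward Jacobian.

    |det(J_D)| is scale dependent, so the inverse condition number (0 at a
    singularity, 1 for a perfectly conditioned matrix) is also checked.
    The threshold `tol` is an illustrative value, not from the paper.
    """
    det = abs(np.linalg.det(J_D))
    inv_cond = 1.0 / np.linalg.cond(J_D)
    return bool(det < tol or inv_cond < tol)

# A rank-deficient matrix (two equal rows) models a type II singularity:
J_sing = np.array([[1.0, 0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0]])
print(near_type_ii(J_sing))     # True
print(near_type_ii(np.eye(4)))  # False
```

As the text notes, this scalar check carries no physical meaning and cannot identify which limbs cause the singularity, which motivates the angle-based index of Section 2.3.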

#### *2.3. Angle between Two Output Twist Screws*

The motion of the mobile platform of a PR is produced by the combined action of the active joints, making it difficult to identify the contribution of each actuator. Takeda and Funabashi [17] divided the movement of the mobile platform ($\$$) into $F$ OTSs as follows:

$$\$ = \rho_1 \hat{\$}_{O1} + \rho_2 \hat{\$}_{O2} + \dots + \rho_F \hat{\$}_{OF} \tag{3}$$

with

$$\hat{\$}_O = \left( \vec{\mu}_{\omega O},\ \vec{\mu}^{\,*}_{vO} \right) \tag{4}$$

$\rho_i$ is the amplitude of each OTS, and $\vec{\mu}_{\omega O}$ and $\vec{\mu}^{\,*}_{vO}$ are the instantaneous screw axis and the linear component of the normalized OTS ($\hat{\$}_O$), respectively. Each $\hat{\$}_O$ is determined by solving Equation (5) considering that $F - 1$ actuators are locked.

$$\hat{\$}_{Oi} \circ \hat{\$}_{Tj} = 0 \quad (i, j = 1, 2, \dots, F,\ i \neq j) \tag{5}$$

where $\circ$ stands for the reciprocal product, and $\hat{\$}_T$ is the unitary TWS.
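The reciprocal product in Equation (5) can be sketched as follows for 6-component screws written as $(\vec{\omega}, \vec{v})$ pairs; the toy screws below are illustrative, not the robot's:

```python
import numpy as np

def reciprocal_product(s1: np.ndarray, s2: np.ndarray) -> float:
    """Reciprocal product of two screws s = (omega, v), each a 6-vector:
    s1 o s2 = omega1 . v2 + omega2 . v1 (zero when the screws are
    reciprocal, i.e., the wrench does no work on the twist)."""
    w1, v1 = s1[:3], s1[3:]
    w2, v2 = s2[:3], s2[3:]
    return float(w1 @ v2 + w2 @ v1)

# Toy check: a pure force along Z through the origin does no work on a
# rotation about Z (reciprocal), but it does on a translation along Z.
twist_rot_z   = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # rotation about Z
twist_trans_z = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # translation along Z
wrench_force_z = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # force along Z

print(reciprocal_product(twist_rot_z, wrench_force_z))    # 0.0 -> reciprocal
print(reciprocal_product(twist_trans_z, wrench_force_z))  # 1.0 -> transmits power
```

Each OTS $\hat{\$}_{Oi}$ is the twist that is reciprocal to the TWSs of all locked actuators, which is exactly what Equation (5) encodes.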

In [18], Wang et al. proved that in a singular configuration of a PR, at least two $\hat{\$}_O$s are linearly dependent. This means that in a type II singularity, the corresponding $\vec{\mu}_{\omega O}$s are equal or parallel and their respective $\vec{\mu}^{\,*}_{vO}$s are equal. Based on this feature, a novel type II singularity proximity index is defined by measuring the angle $\Omega_{i,j}$ between two $\vec{\mu}_{\omega O}$s and verifying the equality of their respective $\vec{\mu}^{\,*}_{vO}$s. Grouping the $F$ vectors $\vec{\mu}_{\omega O}$ in pairs, there are $\binom{F}{2}$ angles $\Omega$, which are defined as:

$$\Omega_{i,j} = \arccos\left( \vec{\mu}_{\omega O_i} \cdot \vec{\mu}_{\omega O_j} \right) \quad (i, j = 1, 2, \dots, F,\ i \neq j) \tag{6}$$

where $i$ and $j$ represent the selected $\vec{\mu}_{\omega O}$s.

In contrast with $|J_D|$, the index $\Omega_{i,j}$ provides a physical scale for measuring the proximity to a type II singularity. When $\Omega_{i,j} = 0$ and $\vec{\mu}^{\,*}_{vO_i} = \vec{\mu}^{\,*}_{vO_j}$, $\hat{\$}_{O_i}$ and $\hat{\$}_{O_j}$ are linearly dependent in Equation (3), identifying the open chains ($i$, $j$) involved in the type II singularity. Considering the centre of the mobile platform of the 3UPS+RPU PR, six possible $\Omega_{i,j}$s are considered:

$$\begin{array}{ll} \Omega_{1,2} = \arccos\left( \vec{\mu}_{\omega O_1} \cdot \vec{\mu}_{\omega O_2} \right) & \Omega_{2,3} = \arccos\left( \vec{\mu}_{\omega O_2} \cdot \vec{\mu}_{\omega O_3} \right) \\ \Omega_{1,3} = \arccos\left( \vec{\mu}_{\omega O_1} \cdot \vec{\mu}_{\omega O_3} \right) & \Omega_{2,4} = \arccos\left( \vec{\mu}_{\omega O_2} \cdot \vec{\mu}_{\omega O_4} \right) \\ \Omega_{1,4} = \arccos\left( \vec{\mu}_{\omega O_1} \cdot \vec{\mu}_{\omega O_4} \right) & \Omega_{3,4} = \arccos\left( \vec{\mu}_{\omega O_3} \cdot \vec{\mu}_{\omega O_4} \right) \end{array} \tag{7}$$

The variables $\vec{\mu}_{\omega O_1}, \dots, \vec{\mu}_{\omega O_4}$ are calculated by solving Equation (5) with the four $\hat{\$}_T$s of the linear actuators defined as

$$\hat{\$}_{T1} = \begin{bmatrix} \vec{z}_{12} \\ \vec{r}_{O_m A_1} \times \vec{z}_{12} \end{bmatrix}, \quad \hat{\$}_{T2} = \begin{bmatrix} \vec{z}_{22} \\ \vec{r}_{O_m B_1} \times \vec{z}_{22} \end{bmatrix}, \quad \hat{\$}_{T3} = \begin{bmatrix} \vec{z}_{32} \\ \vec{r}_{O_m C_1} \times \vec{z}_{32} \end{bmatrix}, \quad \hat{\$}_{T4} = \begin{bmatrix} \vec{z}_{41} \\ \vec{0} \end{bmatrix} \tag{8}$$

where $\vec{z}$ is the unit vector of the forces applied by the actuators, and $\vec{r}$ is the position vector of the connection point of each limb with the mobile platform, measured from $O_m$; see Figure 2.

**Figure 2.** TWSs in the 3UPS+RPU PR.
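A sketch of how the $\Omega_{i,j}$ indices of Equations (6) and (7) might be computed online, given the unit instantaneous screw axes; the axes below are illustrative placeholders (not derived from the 3UPS+RPU model), and the arccos argument is clamped to guard against round-off:

```python
import numpy as np
from itertools import combinations

def omega_angles(mu_omega: np.ndarray) -> dict:
    """Pairwise angles between the instantaneous screw axes of the OTSs.

    `mu_omega` is an (F, 3) array of unit vectors mu_omega_O.  Returns a
    dict {(i, j): Omega_ij in radians} for the C(F, 2) pairs of Eq. (6).
    """
    angles = {}
    for i, j in combinations(range(len(mu_omega)), 2):
        c = np.clip(mu_omega[i] @ mu_omega[j], -1.0, 1.0)  # arccos domain guard
        angles[(i + 1, j + 1)] = float(np.arccos(c))
    return angles

# Illustrative axes: axes 1 and 2 are parallel, so Omega_12 = 0, flagging
# limbs 1 and 2 as singularity candidates once the linear components
# mu*_vO also coincide.
mu = np.array([[1.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
ang = omega_angles(mu)
print(ang[(1, 2)])   # 0.0 (parallel axes)
print(ang[(1, 3)])   # pi/2
```

Because the dictionary key identifies the limb pair, the controller can act only on the two limbs responsible for the approaching singularity, which is the property exploited in the hybrid controller below.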

The capability to detect the proximity to a type II singularity given by the six $\Omega_{i,j}$ indices defined in Equation (7) has been verified from an analytical and experimental perspective [19]. However, the capability to identify the pair of limbs responsible for the type II singularity has not been exploited. Therefore, this study proposes a novel hybrid controller that takes advantage of the index $\Omega_{i,j}$ to release the 3UPS+RPU PR from a type II singular configuration. The index $\Omega_{i,j}$ is defined by means of the position and orientation of the mobile platform. For this reason, an accurate measure of the actual $\vec{X}$ is essential for developing the proposed hybrid controller.

#### *2.4. 3D Tracking System*

To capture the movements of the mobile platform of the PR, a 3D tracking system (3DTS) based on artificial vision was used. The system consists of 10 Flex13 cameras from the manufacturer OptiTrack (Corvallis, OR, USA). These cameras emit infrared light and detect its reflection on markers made of 3M reflective material.

Figure 3 shows the Robotics Laboratory and some cameras of the 3DTS used in this work. The cameras have a resolution of 1.3 megapixels, a capture rate of 120 Hz, and a latency (frame delay) of 8.3 ms. The set of 10 cameras and the use of high-quality 14 mm markers make it possible to obtain an accuracy better than 0.1 mm.

**Figure 3.** Robotics Laboratory equipped with the OptiTrack 3DTS.

The cameras are connected to two OptiHub2 devices. The OptiHub2 provides higher and more consistent power delivery to the cameras for an enhanced tracking range, simpler camera setup and cabling, and support for camera synchronization. The OptiHub2 devices are connected to high-speed USB ports on the camera control computer, and this computer communicates with the robot control computer over an Ethernet connection. Figure 4 shows the architecture of the OptiTrack 3DTS of the laboratory.

**Figure 4.** Laboratory OptiTrack 3DTS architecture.

The Motive Tracker software (Motive) from the same manufacturer, OptiTrack, runs on the camera control computer. This software is used to calibrate the vision system and to obtain 6-DOF positioning results for objects within the tracking area. Motive uses high-level tracking filters and constraints to fine-tune the performance of high-speed object tracking. Motive associates a custom set of markers with a virtual element called a rigid body and offers data access at any stage of the pipeline, i.e., 2D camera images, marker centroid data, labelled markers, and rigid bodies. In addition, it is possible to completely replace the Motive user interface and directly control the system from a new application through the NatNet SDK.

NatNet's client/server architecture allows client applications to run on the same system as the tracking software (Motive), on separate system(s), or both. The SDK integrates seamlessly with standard APIs (C/C++/.NET), tools (Microsoft Visual Studio), and protocols (UDP/Unicast/Multicast). Using the NatNet SDK, developers can quickly integrate OptiTrack motion tracking data into new and existing applications, including custom plugins for third-party applications and engines for real-time streaming. In addition, this SDK provides a .NET interface and sample programs that work directly with MATLAB core, requiring no additional MATLAB modules. Figure 5 summarizes the software architecture of the 3DTS used in this study.

**Figure 5.** Software architecture of the OptiTrack 3DTS.

Regarding the experimental setup, each camera individually builds a 2D image based on the markers' location, so a calibration process is required prior to the experiments in order to ensure that the system correctly reconstructs the 3D position of every marker.

The first step involved in this process is the correct orientation of the cameras to aim them at the workspace and, specifically, at the tracking volume, which, in this case, is the 4-DOF PR. Since the robot always operates in the same location and its workspace is limited, no changes in the camera location or orientation are required, and, therefore, they remain in the same position from the moment they are installed.

Another aspect to control is the brightness and illumination of the scene, so that the markers are visible to the cameras and no unwanted objects are detected. Since the lighting conditions are the same for all experiments, some configuration parameters of the cameras, such as the exposure time, the gain, and the threshold, are set to constant values for all cameras using the software. If any intrusive markers are detected, they can either be manually covered with a cloth or masked in the software before performing the calibration.

After configuring the cameras, the calibration process starts with an empty scene where no markers should be detected, except for those attached to the calibration wand.

By moving the calibration wand, which is provided by OptiTrack, around the workspace, the cameras provide successive 2D projections of the markers, which are used to compute the relative positions of the cameras. The software shows the increasing precision of this estimation as the process progresses (Figure 6), and when a sufficiently high quality is achieved, the process is manually stopped.

**Figure 6.** Calibration wand and experiment to determine the location of the markers.

The second tool, which concludes the calibration process, is the calibration square shown in Figure 7. This object includes three markers at right angles that define the origin and axes of the world coordinate system (also called the ground plane by Motive). The ground plane is placed on the floor within the workspace area in such a way that its markers are visible to as many cameras as possible. This tool incorporates a level to ensure that it lies horizontally.

**Figure 7.** Calibration square.

Although all the cameras remain in the same position, minor movements of any camera between experiments (for example, due to vibrations) can lead to poor tracking performance. For that reason, the calibration process must be performed once a day to ensure reliable 3D tracking. The calibration steps take no more than five minutes. After calibrating the cameras, Motive starts streaming data from all rigid bodies within the workspace. A rigid body is a set of 3 to 20 markers whose relative distances remain constant. In this research, there are 2 rigid bodies, represented by the fixed and mobile platforms, with a set of 4 markers attached to each. Three of the markers define the Cartesian coordinate frame of each platform, and the fourth is added at a random (but known) position. If one of the markers is missed by the software during an experiment, the other three make it possible to reconstruct its position and keep streaming sufficiently accurate data.
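The reconstruction of a dropped marker described above can be sketched as follows: three markers define an orthonormal frame by Gram–Schmidt, and the fourth is recovered from its (previously recorded) local coordinates. The marker positions and local offset below are illustrative values, not the robot's actual marker layout:

```python
import numpy as np

def frame_from_markers(p_origin: np.ndarray, p_x: np.ndarray,
                       p_y: np.ndarray):
    """Build an orthonormal frame (R, t) from three markers: one at the
    frame origin, one along the X direction, one roughly in the XY plane."""
    x = p_x - p_origin
    x = x / np.linalg.norm(x)
    y = p_y - p_origin
    y = y - (y @ x) * x          # Gram-Schmidt: remove the X component
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)           # right-handed Z axis
    R = np.column_stack((x, y, z))
    return R, p_origin

# Assumed local coordinates of the fourth marker in the rigid-body frame,
# recorded once while all four markers were visible:
local_m4 = np.array([0.05, 0.02, 0.0])   # [m]

R, t = frame_from_markers(np.array([0.0, 0.0, 0.0]),
                          np.array([0.1, 0.0, 0.0]),
                          np.array([0.0, 0.1, 0.0]))
m4_world = R @ local_m4 + t   # reconstructed world position of marker 4
print(m4_world)
```

The same construction, applied symmetrically, lets any one of the four markers be recovered from the other three, which is what keeps the stream accurate when a marker is occluded.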

In the PR pose tracking application presented in this paper, the NatNet SDK provides a client class to communicate with the Motive server. A data handler is attached to this client, which works as a callback executed every time a new frame of data is available from the server. This handler retrieves the x, y, z positions of three markers placed on the fixed platform and another three placed on the mobile platform of the PR. Given the coordinates of the six markers, the actual position and orientation of the mobile platform ($\vec{X}_c$) are calculated with respect to the {*Of* − *XFYFZF*} coordinate frame. Finally, $\vec{X}_c$ is sent through ROS2 messages to feed the control system. A MATLAB program has also been designed to provide an online view of $\vec{X}_c$ and to calculate the actual actuator lengths by solving the inverse kinematics. Figure 8 presents the graphical user interface (GUI) for online measurement of $\vec{X}_c$. It is important to note that this program is independent of the control system (and therefore runs on a personal computer) and simply offers a viewing tool.

**Figure 8.** GUI for position and orientation tracking designed in MATLAB.
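The pose computation described above can be sketched as follows: the mobile-platform frame is expressed relative to the fixed frame and ($x_m$, $z_m$, $\theta$, $\psi$) extracted. The rotation convention assumed here ($R = R_y(\psi) R_z(\theta)$) and the synthetic pose are illustrative and may differ from the paper's actual decomposition:

```python
import numpy as np

def Ry(a):  # rotation about Y
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):  # rotation about Z
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def relative_pose(R_f, t_f, R_m, t_m):
    """Express the mobile-platform frame in the fixed frame Of-XFYFZF,
    given both frames in world (camera) coordinates."""
    R_rel = R_f.T @ R_m
    t_rel = R_f.T @ (t_m - t_f)
    return R_rel, t_rel

def extract_pose(R_rel, t_rel):
    """Recover Xc = [xm, zm, theta, psi], assuming R_rel = Ry(psi) @ Rz(theta).
    This angle convention is an assumption for illustration only."""
    theta = np.arctan2(R_rel[1, 0], R_rel[1, 1])
    psi = np.arctan2(R_rel[0, 2], R_rel[2, 2])
    return np.array([t_rel[0], t_rel[2], theta, psi])

# Synthetic check: mobile platform at (0.1, 0, 0.4) m with psi = 0.3 rad
# and theta = 0.2 rad; fixed frame at the world origin.
R_f, t_f = np.eye(3), np.zeros(3)
R_m, t_m = Ry(0.3) @ Rz(0.2), np.array([0.1, 0.0, 0.4])
Xc = extract_pose(*relative_pose(R_f, t_f, R_m, t_m))
print(Xc)   # [xm, zm, theta, psi] = [0.1, 0.4, 0.2, 0.3]
```

In the real pipeline, each rotation matrix would come from the marker triad of the corresponding platform, and $\vec{X}_c$ would then be published over ROS2 to the control system.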
