**2. P/RML Assisted by the CAS Equipped with RM and MVS**

*2.1. The Hardware Architecture of the P/RML Assisted by CAS*

The P/RML Festo MPS 200, presented in Figure 1, is a configurable laboratory mechatronic system consisting of four workstations. Each workstation is controlled by an individual Siemens S7 300 PLC and performs a different stage: buffering, handling, processing, or sorting.


**Figure 1.** Festo MPS 200 assisted by CAS (ARS PeopleBot equipped with RM Cyton 1500 and MVS).

The RM Cyton recovers the pieces, which are then transported by the ARS PeopleBot. The RM carries a camera mounted on the end-effector, forming the eye in hand subsystem. All the elements are controlled by a computer: the camera via a USB connection, and the ARS PeopleBot and RM Cyton over Wi-Fi with the help of the Advanced Robotic Interface for Applications (ARIA) package [20–23]. The combined subsystems are presented in Figure 2.

*2.2. Eye in Hand MVS*

An eye in hand MVS is a system in which the visual sensor is placed on the last joint of the RM, so the movements of the RM also move the camera. Another commonly used type is eye to hand, where, contrary to eye in hand, the sensor is fixed relative to the work environment and observes both the RM and the workpieces. In image-based visual servoing (IBVS), 2D image information is used to estimate the desired motion of the robot. Standard tasks such as detection, tracking, and positioning are realized by minimizing the error between the features extracted from the current image and the visual features of the desired image [13].

In an IBVS architecture, the visual sensor that extracts information about the work environment can follow either the eye in hand implementation, where the movement of the robot induces the movement of the camera, or the eye to hand one, where the RM and its motions are observed from a fixed point. A common method for object detection, classification, and shape identification is the image moments method. It extracts features from the 2D intensity distribution of the image, together with information about the orientation and the coordinates of the gravity center. Figure 3 presents the complete transport trajectory of the CAS, from the last to the first P/RML workstation, covering detection, grabbing, and placing of the workpiece.
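The moments computation described above can be sketched in a few lines. The following snippet is an illustrative stand-in, not the system's actual implementation; it extracts the gravity-center coordinates and the orientation angle from a 2D intensity image:

```python
import numpy as np

def image_moment(img, i, j):
    """Raw moment m_ij of order (i + j) of a 2D intensity image."""
    h, w = img.shape
    X, Y = np.meshgrid(np.arange(w), np.arange(h))
    return float(np.sum((X ** i) * (Y ** j) * img))

def centroid_and_orientation(img):
    """Gravity center and orientation angle from moments up to order 2."""
    m00 = image_moment(img, 0, 0)
    xg = image_moment(img, 1, 0) / m00
    yg = image_moment(img, 0, 1) / m00
    # Central moments, normalized by m00
    mu20 = image_moment(img, 2, 0) / m00 - xg ** 2
    mu02 = image_moment(img, 0, 2) / m00 - yg ** 2
    mu11 = image_moment(img, 1, 1) / m00 - xg * yg
    # Orientation of the principal axis
    alpha = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return xg, yg, alpha
```

For a blob elongated along the X axis, the orientation angle comes out near zero; for a rotated blob, it recovers the rotation of the principal axis.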

**Figure 2.** Structure of the CAS: ARS PeopleBot; RM Cyton 1500; eye in hand MVS.

**Figure 3.** P/RML assisted by CAS.

*2.3. Modelling and Control of MVS Based on the Moments of the Image Method*

The structure of the eye in hand MVS, presented in Figure 4, has the following components: a 7-DOF RM, a visual sensor, and an image-based controller.

**Figure 4.** Eye in hand MVS closed-loop control.

One of the most important aspects of this design is that the image-based controller needs information about the environment of the system in order to minimize the error between the current configuration of the visual features, $f$, and the desired configuration, $f^*$. For modelling the open-loop servoing system, the fixed components of the RM and the visual sensor must be analyzed separately. The input control signal of the RM is $v_c^* = (v^*, \omega^*)^T$, the reference speed of the camera, where $v^* = \left(v_x^*, v_y^*, v_z^*\right)^T$ and $\omega^* = \left(\omega_x^*, \omega_y^*, \omega_z^*\right)^T$ are the linear and angular speed, respectively. The signal $v_c^*$ is expressed in Cartesian space and must be transformed before being applied to the RM.

The posture, obtained by integrating $v_c^*$, is denoted $s = [s_1, s_2, s_3, s_4, s_5, s_6]^T$, and the robot Jacobian is defined as follows:

$$J_r = \begin{bmatrix} \partial s_i / \partial q_j \end{bmatrix}, \; i = 1, \ldots, 6, \; j = 1, \ldots, 7,\tag{1}$$

where $q_j$, $j = 1, \ldots, 7$, represent the states of the RM's joints. The transformation of $v_c^*$ from Cartesian space to the robotic joint space is done with the inverse of $J_r$ (in practice, the pseudo-inverse, since the 7-DOF arm is redundant) and the interaction matrix. The moments $m_{ij}$ are a set of visual features whose time variation, $\dot{m}_{ij}$, of order $(i + j)$, depends on the speed of the visual sensor $v_c^*$ according to the following equation:

$$\dot{m}_{ij} = L_{m_{ij}} v_c^*, \tag{2}$$

where $L_{m_{ij}} = \begin{bmatrix} m_{v_x} & m_{v_y} & m_{v_z} & m_{\omega_x} & m_{\omega_y} & m_{\omega_z} \end{bmatrix}$ is the interaction matrix.

The interaction matrix associated with a set of image moments $f = [x_n, y_n, a_n, \tau, \xi, \alpha]^T$ for $n$ points is processed in this manner [12,13,24–26]:

$$L_f = \begin{bmatrix} -1 & 0 & 0 & a_n e_{11} & -a_n (1 + e_{12}) & y_n \\ 0 & -1 & 0 & a_n (1 + e_{21}) & -a_n e_{11} & -x_n \\ 0 & 0 & -1 & -e_{31} & e_{32} & 0 \\ 0 & 0 & 0 & \tau_{\omega_x} & \tau_{\omega_y} & 0 \\ 0 & 0 & 0 & \xi_{\omega_x} & \xi_{\omega_y} & 0 \\ 0 & 0 & 0 & \alpha_{\omega_x} & \alpha_{\omega_y} & -1 \end{bmatrix}. \tag{3}$$

The analytical form of the parameters from (3) can be found in [27].
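Although the text does not spell out the control computation, the classical IBVS law implied by this architecture is $v_c^* = -\lambda L_f^{+}(f - f^*)$, followed by the mapping into joint space through the pseudo-inverse of the robot Jacobian. A minimal numeric sketch, with $L_f$ and $J_r$ as hypothetical stand-in matrices rather than the analytic forms above:

```python
import numpy as np

def ibvs_camera_velocity(f, f_star, L_f, lam=0.5):
    """Classical IBVS law: v_c = -lambda * pinv(L_f) @ (f - f_star)."""
    return -lam * np.linalg.pinv(L_f) @ (f - f_star)

def to_joint_space(v_c, J_r):
    """Map the Cartesian camera twist to joint velocities; the
    Moore-Penrose pseudo-inverse is used since J_r is 6x7."""
    return np.linalg.pinv(J_r) @ v_c
```

With a full-row-rank Jacobian, the computed joint velocities reproduce the requested camera twist exactly (minimum-norm solution).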

*2.4. Assumptions and Task Planning*

The processing/reprocessing tasks can be simplified into a sequence of actions combined in parallel with workpiece positioning and transportation along the cells of the mechatronics line. Figure 5 shows the task planning as a block diagram with the actions taken by the CAS when a workpiece has failed the quality test. The workpiece is then transported from the storage station (last cell) to the buffer station (first cell). The red lines represent the reprocessing tasks performed when a piece has been found faulty, and the black lines represent the normal processing assignments when an initial workpiece has been supplied.

**Figure 5.** Processing/Reprocessing cycle for a piece that has failed the quality test.

Because the P/RML is at its core a flexible industrial production line, it is designed to adapt easily to changes in the type and quantity of the product and can provide a wide range of products. Through reprocessing, faulty products can be retrieved and reworked to the required quality standard. Since the technology used by the P/RML can be altered by aspects such as process modes, type of finished workpiece, and operation times, some assumptions are needed for the entire system to work as intended:

**Assumption 1.** *The manufacturing technology processes the workpieces in pipeline mode;*

**Assumption 2.** *The first station is provided with one piece at a time for processing;*

**Assumption 3.** *The initial conditions and parameters of the technology are known, such as quantity of pieces and task duration;*

**Assumption 4.** *Only one type of piece, available in different colors, can be processed or reprocessed, since the proposed processing operations are specific to a type of product;*

**Assumption 5.** *The number of the workstations involved in processing/reprocessing by the P/RML is previously known and unchanged;*

**Assumption 6.** *The workstations of the P/RML have a linear distribution in the following order: buffer, handling, processing, and storing;*

**Assumption 7.** *The processing/reprocessing tasks are executed on the same workstation and the pieces can be processed simultaneously in different stages;*

**Assumption 8.** *A red workpiece will mean that the quality test is not passed and reprocessing is needed;*

**Assumption 9.** *In the storage, the first level from the top is utilized for rejected workpieces;*

**Assumption 10.** *One CAS assists the P/RML, which is used for picking up, transporting, and releasing the workpieces;*

**Assumption 11.** *One eye in hand visual sensor is mounted on the RM Cyton;*

**Assumption 12.** *To avoid conflict, priority is given to the workpiece for reprocessing;*

**Assumption 13.** *The technology includes two quality tests, which distinguish between a scrap workpiece and one recoverable by reprocessing.*

#### **3. Direct and Inverse Kinematics Model of RM Cyton 1500**

The RM Cyton 1500 offers robust and precise manipulation for a wide variety of applications. It has been designed to mimic the structure of a human arm: the shoulder has three joints, the elbow has one joint, and the wrist has three joints. Together, these joints make up the 7 DOF of the RM. The angle limits of the RM are shown in Table 1.


**Table 1.** 7-DOF RM Cyton 1500.

For testing and simulating the RM, kinematic modeling needs to be performed, its main objective being the study of the RM's mechanical structure through the direct and the inverse kinematics.

The direct kinematics consists of finding the position of the end-effector in space from the movements of the joints, i.e., $F(\theta_1, \theta_2, \ldots, \theta_n) = [x, y, z, R]$, while the inverse kinematics consists of determining the value of every joint from the position and orientation of the end-effector, $F(x, y, z, R) = [\theta_1, \ldots, \theta_n]$.

An RM is composed, in general, of three types of joints, each providing one or more degrees of freedom. In the case of the RM Cyton 1500, these are the shoulder (three joints), the elbow (one joint), and the wrist (three joints).


Presented in Figure 6 is a compact block diagram of the kinematic modeling [28].

**Figure 6.** Block diagram of kinematics models.

The configuration of the RM and its joints is presented in Figure 7 [29].

**Figure 7.** Angles that represent every joint, (**a**) Shoulder type, (**b**) Elbow type and (**c**) Wrist type.

One convention used for the selection of reference frames, especially in robotics applications, is the Denavit&Hartenberg (D&H) convention, illustrated in Figure 8 [30]; the D&H parameters of the RM Cyton 1500 are presented in Table 2.

**Figure 8.** Denavit&Hartenberg Convention.

**Table 2.** RM Cyton 1500 D&H parameters.


To determine the direct kinematic model, a fixed coordinate system with index 0 has been placed for the shoulder-type joints and, for the other joints, systems with index $i$, $i = 1, \ldots, 7$. In this convention, every homogeneous transformation $A_i$ is represented as a product of four basic transformations, where the four variables $\theta_i$, $a_i$, $d_i$, $\alpha_i$ are the parameters associated with each link and joint.

$$A_i = R_{z,\theta_i}\,\mathrm{Trans}_{z,d_i}\,\mathrm{Trans}_{x,a_i}\,R_{x,\alpha_i},\tag{4}$$

$$A_i = \begin{bmatrix} c\theta_i & -s\theta_i & 0 & 0\\ s\theta_i & c\theta_i & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & d_i\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & a_i\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & c\alpha_i & -s\alpha_i & 0\\ 0 & s\alpha_i & c\alpha_i & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$A_i = \begin{bmatrix} c\theta_i & -s\theta_i c\alpha_i & s\theta_i s\alpha_i & a_i c\theta_i\\ s\theta_i & c\theta_i c\alpha_i & -c\theta_i s\alpha_i & a_i s\theta_i\\ 0 & s\alpha_i & c\alpha_i & d_i\\ 0 & 0 & 0 & 1 \end{bmatrix}. \tag{5}$$
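Equation (5) translates directly into code. The following sketch, illustrative rather than the authors' implementation, builds $A_i$ from the four D&H parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform A_i of Eq. (5) from the four D&H parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

Setting all four parameters to zero yields the identity transform, and with $\theta_i = 0$ the translation column reduces to $(a_i, 0, d_i)$, matching Eq. (5).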

With the help of the D&H table, the $A_i$ matrices have been determined for every DOF of the RM:

$$A_1 = \begin{bmatrix} c_1 & 0 & s_1 & 0 \\ s_1 & 0 & -c_1 & 0 \\ 0 & 1 & 0 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \; A_2 = \begin{bmatrix} c_2 & -s_2 & 0 & a_2 c_2 \\ s_2 & c_2 & 0 & a_2 s_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

$$A_3 = \begin{bmatrix} c_3 & 0 & s_3 & 0 \\ s_3 & 0 & -c_3 & 0 \\ 0 & 1 & 0 & d_3 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \; A_4 = \begin{bmatrix} c_4 & -s_4 & 0 & a_4 c_4 \\ s_4 & c_4 & 0 & a_4 s_4 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \; A_5 = \begin{bmatrix} c_5 & 0 & s_5 & 0 \\ s_5 & 0 & -c_5 & 0 \\ 0 & 1 & 0 & d_5 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \tag{6}$$

$$A_6 = \begin{bmatrix} c_6 & -s_6 & 0 & a_6 c_6 \\ s_6 & c_6 & 0 & a_6 s_6 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \; A_7 = \begin{bmatrix} c_7 & 0 & s_7 & a_7 c_7 \\ s_7 & 0 & -c_7 & a_7 s_7 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

The direct kinematics model of the RM Cyton is determined by multiplying the seven matrices:

$$T_1^0 = A_1, \tag{7}$$

$$T\_7^0 = A\_1 A\_2 A\_3 A\_4 A\_5 A\_6 A\_7 = \begin{bmatrix} s\_x & n\_x & a\_x & d\_x \\ s\_y & n\_y & a\_y & d\_y \\ s\_z & n\_z & a\_z & d\_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{8}$$
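The chain product of Eq. (8) can be sketched as follows; the D&H rows below are illustrative placeholders, not the actual Table 2 values for the Cyton 1500:

```python
import numpy as np

def forward_kinematics(joint_angles, dh_rows):
    """T_7^0 as the product A_1 ... A_7 (Eq. (8)).
    joint_angles: the seven theta_i; dh_rows: per-joint (d, a, alpha)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        A = np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])
        T = T @ A
    return T

# Illustrative D&H rows (d, a, alpha) -- placeholders, not the Table 2 values.
DH = [(0.19, 0.0, np.pi / 2), (0.0, 0.07, 0.0), (0.17, 0.0, np.pi / 2),
      (0.0, 0.07, 0.0), (0.17, 0.0, np.pi / 2), (0.0, 0.07, 0.0),
      (0.0, 0.13, np.pi / 2)]
```

Whatever the joint configuration, the resulting $T_7^0$ keeps the homogeneous bottom row and an orthonormal rotation block, which is a quick sanity check on the implementation.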

Figure 9 illustrates all the joints and angles composing the RM Cyton. The kinematic model obtained is used in the control structure of the RM Cyton.

**Figure 9.** Joints and angles of 7-DOF RM Cyton 1500.

#### **4. Trajectory Tracking Sliding-Mode Control of ARS PeopleBot**

The ARS PeopleBot is controlled to follow a desired trajectory based on continuous-time sliding-mode control. In trajectory tracking sliding-mode control (TTSMC), the real ARS follows the trajectory of a virtual one, with the desired trajectory generated by the virtual ARS denoted by

$$q_d(t) = \begin{bmatrix} x_d & y_d & \theta_d \end{bmatrix}^T. \tag{9}$$

The kinematic model of the virtual ARS becomes

$$\begin{cases} \dot{x}_d = v_d \cos \theta_d\\ \dot{y}_d = v_d \sin \theta_d\\ \dot{\theta}_d = \omega_d \end{cases} \tag{10}$$
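The virtual ARS of Eq. (10) can be simulated by simple Euler integration; a sketch with hypothetical sampling values:

```python
import numpy as np

def simulate_virtual_ars(v_d, omega_d, dt=0.01, steps=1000, q0=(0.0, 0.0, 0.0)):
    """Euler integration of the unicycle model of Eq. (10)."""
    x, y, theta = q0
    path = [(x, y, theta)]
    for _ in range(steps):
        x += v_d * np.cos(theta) * dt
        y += v_d * np.sin(theta) * dt
        theta += omega_d * dt
        path.append((x, y, theta))
    return np.array(path)
```

With $\omega_d = 0$ the virtual robot moves in a straight line; for instance, at $v_d = 0.12$ m/s (the maximum reported later for the PeopleBot) it covers 1.2 m in 10 s.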

where $x_d$ and $y_d$ represent the Cartesian coordinates of the geometric center, $v_d$ the linear speed, $\theta_d$ the orientation, and $\omega_d$ the angular speed. When the ARS follows the desired trajectory, tracking and orientation errors appear on the X and Y axes:

$$\begin{bmatrix} x_e \\ y_e \\ \theta_e \end{bmatrix} = \begin{bmatrix} \cos \theta_d & \sin \theta_d & 0 \\ -\sin \theta_d & \cos \theta_d & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_r - x_d \\ y_r - y_d \\ \theta_r - \theta_d \end{bmatrix}. \tag{11}$$
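Equation (11) is just a rotation of the pose difference into the frame of the virtual robot; a direct transcription:

```python
import numpy as np

def tracking_errors(q_r, q_d):
    """Tracking/orientation errors of Eq. (11): the pose difference
    expressed in the frame of the desired (virtual) robot."""
    x_r, y_r, th_r = q_r
    x_d, y_d, th_d = q_d
    R = np.array([[ np.cos(th_d), np.sin(th_d), 0.0],
                  [-np.sin(th_d), np.cos(th_d), 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ np.array([x_r - x_d, y_r - y_d, th_r - th_d])
```

When the real pose coincides with the desired one, all three errors vanish; a purely lateral world-frame offset is mapped into the virtual robot's own lateral error.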

The error dynamics are given by the following equations

$$\begin{cases} \dot{x}_e = -v_d + v_r\cos\theta_e + \omega_d y_e\\ \dot{y}_e = v_r\sin\theta_e - \omega_d x_e\\ \dot{\theta}_e = \omega_r - \omega_d \end{cases} \tag{12}$$

Since the ARS orientation is never perpendicular to the desired trajectory, it is assumed that $|\theta_e| < \pi/2$. Given the position errors (11) and their derivatives (12), the sliding surfaces are

$$\begin{cases} s_1 = \dot{x}_e + k_1 x_e \\ s_2 = \dot{y}_e + k_2 y_e + k_0\,\mathrm{sgn}(y_e)\,\theta_e \end{cases} \tag{13}$$

Presented in Figure 10 are the real and virtual ARS with absolute coordinates and orientation following a desired trajectory:

**Figure 10.** Real ARS, Virtual ARS, Desired trajectory.

The parameters $k_0$, $k_1$, $k_2$ are positive constants. If $s_1$ converges to zero, then $x_e$ also converges to zero, and if $s_2$ tends to zero, then $\dot{y}_e = -k_2 y_e - k_0\,\mathrm{sgn}(y_e)\,\theta_e$. The surface derivatives

$$\begin{cases} \dot{s}_1 = \ddot{x}_e + k_1 \dot{x}_e \\ \dot{s}_2 = \ddot{y}_e + k_2 \dot{y}_e + k_0\,\mathrm{sgn}(y_e)\,\dot{\theta}_e \end{cases} \tag{14}$$

are written in a compact form:

$$\dot{s} = -Q\,\mathrm{sgn}(s) - Ps, \tag{15}$$

where

$$Q = \begin{bmatrix} Q_1 & 0 \\ 0 & Q_2 \end{bmatrix}, \; P = \begin{bmatrix} P_1 & 0 \\ 0 & P_2 \end{bmatrix}, \; Q \ge 0,\, P \ge 0,\tag{16}$$

$$s = \begin{bmatrix} s_1 & s_2 \end{bmatrix}^T, \quad \mathrm{sgn}(s) = \begin{bmatrix} \mathrm{sgn}(s_1) & \mathrm{sgn}(s_2) \end{bmatrix}^T. \tag{17}$$

Thus, from (11)–(14), the TTSMC law becomes as follows:

$$\dot{v}_r = \frac{-Q_1\,\mathrm{sgn}(s_1) - P_1 s_1 - k_1 \dot{x}_e - \dot{\omega}_d y_e - \omega_d \dot{y}_e + v_r \dot{\theta}_e \sin\theta_e + \dot{v}_d}{\cos \theta_e},\tag{18}$$

$$\omega_r = \frac{-Q_2\,\mathrm{sgn}(s_2) - P_2 s_2 - k_2 \dot{y}_e - \dot{v}_r \sin \theta_e + \dot{\omega}_d x_e + \omega_d \dot{x}_e}{v_r \cos \theta_e + k_0\,\mathrm{sgn}(y_e)} + \omega_d. \tag{19}$$
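The reaching law (15) is what drives each sliding variable to zero. A scalar simulation sketch, with hypothetical gains $Q$ and $P$, illustrates the convergence mechanism:

```python
import numpy as np

def reach(s0, Q, P, dt=0.001, steps=5000):
    """Integrate the scalar reaching law s_dot = -Q*sgn(s) - P*s (Eq. (15))."""
    s = s0
    for _ in range(steps):
        s += (-Q * np.sign(s) - P * s) * dt
    return s
```

The $-P s$ term gives exponential decay far from the surface, while the $-Q\,\mathrm{sgn}(s)$ term forces $s$ across zero in finite time; in discrete time it leaves a small chattering band of width roughly $Q\,\Delta t$ around the surface.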

#### **5. Real-Time Control of P/RML Assisted by CAS**

The assisting technology for the P/RML consists of one dynamic robotic system, the CAS, used for picking, placing, and transporting the workpieces.

The process is based on three main control loops:


**Figure 11.** Communication block set of the P/RML, CAS and computer.

Displayed in Figure 12 are a few real-time pictures of the process: the object is scanned and, if detected, picked up by the gripper of the RM and transported by the CAS to the first workstation. The main steps of the object detection process are presented in Figure 13. Figure 13a shows the conversion from the RGB (Red, Green, Blue) color model to the HSV (Hue, Saturation, Value) color model, which is more robust to changes in light; during the experiments, the type of light used, natural sunlight or laboratory light, had a major effect. Figure 13b shows the detected object once the colors fall between the HSV limits and the shape (in this case, a circle) has been found using the Ramer–Douglas–Peucker algorithm and Canny edge detection, both implemented with the OpenCV libraries. Finally, in Figure 13c, the object is tracked once both the color and shape conditions have been met. The centroid is tracked using the image moments method for its versatility and efficiency: deviations appear only in the 2D space, the distance on the Z axis is constant, and the object does not rotate, since the workpiece always arrives at the same place; only the CAS lacks a stable position, which is why an MVS is needed [31].
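The detection steps described above (RGB-to-HSV conversion, thresholding between HSV limits, centroid tracking via image moments) can be approximated in pure Python. This sketch replaces the OpenCV calls with `colorsys`/NumPy stand-ins and omits the shape check, so it is an illustration of the idea rather than the system's actual code:

```python
import colorsys
import numpy as np

def rgb_to_hsv(img):
    """Per-pixel RGB (floats in 0..1) to HSV, analogous to the
    RGB-to-HSV conversion step done with OpenCV in the real system."""
    flat = img.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    return hsv.reshape(img.shape)

def detect_centroid(img, h_lo, h_hi, s_min=0.4, v_min=0.4):
    """Threshold between HSV limits, then track the blob centroid
    with the image-moments method (m10/m00, m01/m00)."""
    hsv = rgb_to_hsv(img)
    mask = ((hsv[..., 0] >= h_lo) & (hsv[..., 0] <= h_hi) &
            (hsv[..., 1] >= s_min) & (hsv[..., 2] >= v_min)).astype(float)
    m00 = mask.sum()
    if m00 == 0:
        return None  # no pixels within the HSV limits
    ys, xs = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    return (xs * mask).sum() / m00, (ys * mask).sum() / m00
```

On a synthetic image with a red patch on a white background, the saturation threshold rejects the background and the returned centroid lands on the patch center.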

**Figure 12.** Real-time control of the CAS for picking up workpieces with the following order: (**a**) home position, (**b**) scanning position (**c**) picking up object (**d**) home position with the workpiece picked up in the gripper.

**Figure 13.** Object Detection of the workpiece with the following steps: (**a**) Conversion from RGB to HSV (**b**) image segmentation after the color has been found between the HSV limits and the shape corresponding to object has been found (**c**) object color and shape has been found and is being tracked.

Likewise, Figure 14 presents pictures of a similar process, this time for placing the workpieces rather than picking them up. The biggest difference between Figures 13 and 15 is that, in Figure 13, the object itself is detected while, in Figure 15, a reference point specific to the first workstation is detected and the object is placed there. Also, while in the first case the detected shape is a circle, in the second it is a rectangle.

**Figure 14.** Real-time control of the CAS for placing the workpieces with the following order: (**a**) parking position, (**b**) scanning position, (**c**) placing the object, (**d**) RM returning to the home position with the object placed on the workstation.

**Figure 15.** Object Detection of the reference point with the following steps: (**a**) Conversion from RGB to HSV (**b**) image segmentation after the color has been found between the HSV limits and the shape corresponding to reference has been found (**c**) reference object color and shape has been found and is being tracked.

In Figure 16a, the complete 3D picking-up trajectory is presented: the movement from the home position to the scanning position, then above the object, picking it up, and back to the home position, so that the CAS can transport it to the first workstation for reprocessing if it fails the quality test. Figure 16b presents the 3D trajectory for placing the object.

**Figure 16.** 3D Trajectories of the RM for (**a**) picking and (**b**) placing the workpiece.

Although the MVS starting and ending points are present in Figure 16a,b, they are hard to visualize, which is why Figure 17 is needed for the case of picking up the object, with the X and Z axes in Figure 17a and the Y axis in Figure 17b. The entire MVS process takes ~7 s, and the deviation is most apparent on the Y axis, where it reaches up to 10 mm from the desired position.

**Figure 17.** Trajectories evolution over time for (**a**) X and Z axis and (**b**) Y axis for picking the workpiece.

The time for placing the workpiece under MVS control is about 4 s, since the difference between the desired and actual features is much smaller than when picking up. The X and Z trajectories over time are displayed in Figure 18a. In Figure 18b, only the Y axis trajectory is shown, since the distance is much smaller than on the X and Z axes.

**Figure 18.** Trajectories evolution over time for (**a**) X and Z axis and (**b**) Y axis in the case of placing the workpiece.

The PeopleBot trajectory has some deviations on the X and Y axes, as can be observed in Figure 19a. Along the X axis, the error is near zero, while on the Y axis the final error is about 4 mm, as shown in Figure 19b.

**Figure 19.** (**a**) Real and desired trajectories of the ARS PeopleBot and (**b**) evolution of the errors in time.

The velocity profile of the ARS PeopleBot along the forward trajectory, during transportation of the workpiece, is presented in Figure 20a, with a maximum of 0.12 m/s. The state transition of the workpiece is shown in Figure 20b. While the RM is active for picking and placing the workpiece, the CAS has zero velocity.

**Figure 20.** (**a**) ARS PeopleBot velocity evolution over time; (**b**) state transition of the workpiece.

In Figure 21, the X and Y trajectories are displayed, so that the differences between the desired and real trajectories can be identified more easily.

**Figure 21.** ARS PeopleBot trajectory: (**a**) X axis; (**b**) Y axis Trajectories.
