**1. Introduction**

In the context of the evolutionary transition from Industry 4.0 to Industry 5.0, modern industrial enterprises increasingly need to organize production around collaborative robots [1], which have proven themselves not only in general industrial tasks [2–5] but also in assisting humans in the one-off and small-batch production of innovative products [6] across various industries: mechanical assembly, medicine, electronics production [7], etc.

A sufficient number of collaborative robot models from the world's leading manufacturers, such as KUKA, ABB, FANUC, Kawasaki, and Yaskawa [8], can be integrated into enterprise processes. In the basic configuration, however, the collaborative robots supplied to enterprises cannot realize the potential synergistic effect of human–robot interaction; as a rule, the collaborative property is reduced to ensuring human safety in a workspace shared with the robot inside a cyberphysical system. The synthesis of control systems for a collaborative process only on the basis of a robot equipped with an internal sensor system (a complex of force–torque

**Citation:** Gorkavyy, M.; Ivanov, Y.; Sukhorukov, S.; Zhiganov, S.; Melnichenko, M.; Gorkavyy, A.; Grabar, D. Improving Collaborative Robotic Complex Efficiency: An Approach to the Intellectualization of the Control System. *Eng. Proc.* **2023**, *33*, 18. https://doi.org/10.3390/engproc2023033018

Academic Editors: Askhat Diveev, Ivan Zelinka, Arutun Avetisyan and Alexander Ilin

Published: 13 June 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Komsomolsk-na-Amure State University, Komsomolsk-na-Amure 681013, Russia

sensors), which makes it possible to implement mechanisms for organizational and technical avoidance of collisions between the robot and scene objects, is difficult due to the limitations of these means of measuring and evaluating the internal environment in the robot's workspace. The resulting solutions will not be highly efficient, since predictive control is impossible, while error-driven control entails additional time and energy costs and cannot, in principle, guarantee the desired result (bringing the tool center point (TCP) of the robot to the position required by the technological operation). Improving the efficiency of control systems for cyberphysical systems (CPSs) based on cobots is possible through the expansion of sensory tools (in particular, vision systems) [9] and intelligent modules that plan and optimize the robot's trajectory movements taking into account changes in the external environment [1,10].

In this paper, the authors propose an approach to the formation of structural and functional models of a collaborative robotic complex equipped with an intelligent control system capable of self-learning and additional training. The aim is to develop multimodal adaptive algorithms and methods for controlling the behavior of collaborative robotic systems, taking into account emergency situations and extreme conditions in a nondeterministic environment, as well as to give an approximate assessment of the economic potential of extending the control system of the KUKA LBR iiwa 7 collaborative robot, using the example of a typical technological operation in the event of a collision.

In carrying out the study, the authors used the basic approaches of control theory, elements of the vector–matrix description of control systems, and methods of system analysis, including structural and functional decomposition; elements of the proposed solutions are confirmed by the results of an experiment performed on industrial equipment from KUKA Robotics.

#### **2. An Approach to the Intellectualization of the Control System**

Figure 1 shows a block diagram of the control of a cyberphysical system (CPS) consisting of a decision maker (HS, the human operator), a collaborative robot (CS), and a library of behavior models (BMs) inside a standard robot control system.

**Figure 1.** Structural and functional diagram of the CPS control system.

A typical CPS operates in two main modes: implementation of a technological process and debugging of a technological process (performing individual operations, generating program code, and setting up a strategy for responding to data from the internal sensor system of the collaborative robot (CS)).

The vector–matrix description of the linear part [11] of the cyberphysical system's control system is given below (for some cases, it can also be supplemented with additional methods such as [12,13]).

$$\begin{cases} \dot{x}(t) = A\,x(t) + B_{u_1}\,u(t) + B_{u_2}\,u_M(t) + B_{M_1}\,f_{MHS}(t) + B_{M_2}\,f_{ME}(t) \\ y_{CS}(t) = C_{MS}\,x(t) \\ y_{HS}(t) = C_{EsS}\,s(t) + \eta(t) \end{cases} \tag{1}$$

$$Procedure : TaskClass \Rightarrow \left\langle objTask,\; objTaskModelTune(\ldots),\; objTaskModelSelect(\ldots) \right\rangle \tag{2}$$

where *A* is the functional matrix of the state of the object (power block); *Bu*1, *Bu*2 are control matrices; *BM*1, *BM*2 are disturbance matrices; *η*(*t*) is the noise vector; and *yCS*(*t*), *yHS*(*t*) are measurement vectors. The state, control and disturbance vectors are

$$x(t) = [x_1(t), x_2(t) \dots x_7(t)]^T, \text{ where } x_i(t) = [\varphi_i, \omega_i, \varepsilon_i]^T \text{ (joint angle, velocity and acceleration)};$$

$$u(t) = [u_1(t), u_2(t) \dots u_7(t)]^T \text{—control law};$$

$$u_M(t) = [u_{M_1}(t), u_{M_2}(t) \dots u_{M_7}(t)]^T \text{—drive torques};$$

$$f_{MHS}(t) = [f_{MHS_1}(t), f_{MHS_2}(t) \dots f_{MHS_7}(t)]^T \text{—disturbance torques from HS};$$

$$f_{ME}(t) = [f_{ME_1}(t), f_{ME_2}(t) \dots f_{ME_7}(t)]^T \text{—disturbance torques from the external environment};$$

$$s_{CS}(t) = [s_{CS_1}(t), s_{CS_2}(t) \dots s_{CS_n}(t)]^T,$$

where *n* → ∞ (all points of the robot surface) and $s_{CS_i}(t) = [X_{CS_i}, Y_{CS_i}, Z_{CS_i}]^T$.
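As a rough numerical sketch of the linear part of Equation (1), the update below integrates the state equation by forward Euler; the matrix values, time step and the reduced 2-state example are illustrative placeholders, not identified parameters of a real robot joint:

```python
import numpy as np

def euler_step(x, u, u_M, f_MHS, f_ME, A, B_u1, B_u2, B_M1, B_M2, dt=1e-3):
    """One forward-Euler step of the linear state equation in (1)."""
    x_dot = (A @ x + B_u1 @ u + B_u2 @ u_M
             + B_M1 @ f_MHS + B_M2 @ f_ME)
    return x + dt * x_dot

# Illustrative 2-state (angle, velocity), single-input example.
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B_u1 = np.array([[0.0], [1.0]])
B_zero = np.zeros((2, 1))   # unused input/disturbance channels in the sketch
C_MS = np.eye(2)            # full-state measurement, y_CS = C_MS x

x = np.zeros(2)
zero = np.zeros(1)
for _ in range(1000):       # 1 s of simulated time with constant control u = 1
    x = euler_step(x, np.array([1.0]), zero, zero, zero,
                   A, B_u1, B_zero, B_zero, B_zero)
y_CS = C_MS @ x             # measured output after 1 s
```

With these placeholder matrices the velocity state settles toward the first-order response 2(1 − e^(−t/2)); swapping in identified *A*, *B* matrices of a real drive would follow the same pattern.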

In accordance with the task, which is a set of documentation describing the technological process, HS selects a preinstalled model *Mi*, if any, or generates a new one via channel (6). The model includes a set of CS movement trajectories, speed modes, types of movement, activation levels of the (internal) sensory system, and algorithms for responding to activation in the event of a signal (9). CS implements movements according to *u* (channel (7)) generated by BM. The values of the state variables *x* (channel (8)) of the collaborative system are available both to CS, through its measurement system (MS), and to HS (channel (4)). In addition, some of the CS state variables are approximately estimated using the natural human senses (vision, touch and hearing) via the channel (5) estimate system (EsS). This feedback option, in addition to monitoring the general state of the process, is used when forming/correcting a model from BMs by physically acting on the robotic arm (channel (3)) and moving it to the desired point while recording its coordinates in the model.

The internal sensor system and the sensor-response algorithms in BMs allow CS to "compensate" for disturbing influences *f* = *fME* + *fMHS* within a limited range. In the standard configuration of the CS, the response to a disturbance occurs "by error", and the built-in sensor system does not allow forming, online, an idea of an obstacle in statics, let alone in dynamics. These facts do not allow the CPS control system to guarantee the restoration of the motion algorithm after the appearance of a disturbance. In addition, such a system, in which the role of the control unit is performed by a person, is difficult to optimize, especially according to a complex system of criteria including, for example, speed and energy efficiency.
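The model selection-or-tuning step formalized in Equation (2) can be sketched as a small behavior-model library; all field and method names below (speed_mode, sensor_threshold, etc.) are hypothetical, chosen only to mirror the model contents listed above, and not the robot vendor's data model:

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorModel:
    """One entry of the behavior-model library (BMs): trajectory, speed mode,
    internal sensor activation level and a reaction algorithm name."""
    task_class: str
    trajectory: list = field(default_factory=list)
    speed_mode: str = "reduced"
    sensor_threshold: float = 10.0   # N, force-torque activation level
    reaction: str = "stop"

class BMLibrary:
    """Minimal sketch of Equation (2): select a preinstalled model for a
    task class, or tune (create/adjust) one when none fits."""
    def __init__(self):
        self._models = {}

    def select(self, task_class):
        # objTaskModelSelect(...): preinstalled model, or None if absent
        return self._models.get(task_class)

    def tune(self, task_class, **overrides):
        # objTaskModelTune(...): create the model if absent, then adjust it
        model = self._models.setdefault(task_class, BehaviorModel(task_class))
        for name, value in overrides.items():
            setattr(model, name, value)
        return model

library = BMLibrary()
model = library.tune("pick_and_place", speed_mode="nominal",
                     trajectory=[(0.3, 0.0, 0.5), (0.3, 0.2, 0.5)])
```

Subsequent calls to `select("pick_and_place")` return the tuned model, while an unknown task class yields `None`, prompting HS to generate a new model via channel (6).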


Moreover, an important factor reducing the efficiency of the CPS (Figure 1) is that HS itself can generate a disturbance (channel (1)), both in the process debugging mode and in the execution mode.

The presented problems and limitations inherent in the standard CPS, which accounts for most collaborative systems at industrial enterprises, do not allow building a full-fledged synergistic system [6] or organizing effective human–machine interaction in an innovative technological process that takes into account emergency situations in a nondeterministic environment.

The most promising way to improve the CPS, according to [1,14], is to add to the typical CPS (Figure 1) the means of "feeling" for the collaborative machine: an extended (multimodal) sensor system [9] and advanced intelligent algorithms as part of a distributed system of analysis and control [1].

This research is aimed at the synthesis of control laws within the CPS that change the values of the state variables not only of CS but also of HS, through the formation of advice or instructions. At the same time, it is proposed to associate the interface of the intelligent control system with CS, thereby creating for the operator the illusion that the collaborative machine senses and is animate [15].

The proposed control approach is implemented according to the structural and functional diagram compiled by the authors (Figure 2), which expands the diagram of Figure 1 with an intelligent control system comprising a subsystem for data collection, recognition and primary analysis; a subsystem for generating an intelligent logical inference of a control law; and a subsystem for organizing an impact on HS, including an interface.

**Figure 2.** Structural and functional diagram of an intelligent control system for a cyberphysical system.

When forming the control law, the intelligent system relies on the optimization mechanisms of the intelligent module (IM), among which the key ones are minimizing the length of movement trajectories, minimizing the execution time of a technological operation, minimizing the number of collision-avoidance maneuvers, and minimizing energy consumption [16–18], as well as their combinations within a weighted system of criteria.
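A weighted system of criteria of the kind described above can be sketched as a scalar cost over normalized metrics; the candidate trajectories, metric names and weights below are invented for illustration only:

```python
def weighted_cost(metrics, weights):
    """Scalar cost from normalized criteria (lower is better); the weight
    split is a design choice of the intelligent module (IM)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[name] * metrics[name] for name in weights)

# Two hypothetical candidate detour trajectories, metrics normalized to [0, 1].
candidates = {
    "detour_short":  {"path_length": 0.4, "cycle_time": 0.5, "energy": 0.6},
    "detour_smooth": {"path_length": 0.6, "cycle_time": 0.4, "energy": 0.3},
}
weights = {"path_length": 0.2, "cycle_time": 0.4, "energy": 0.4}
best = min(candidates,
           key=lambda name: weighted_cost(candidates[name], weights))
```

Shifting the weights toward `cycle_time` or `energy` changes which candidate wins, which is exactly the lever the IM uses when combining criteria.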

The functions of the blocks of the intelligent control system for the cyberphysical system are described below:

(1) IM—Dynamic synthesis of the CPS performance-criteria system based on the selected BM prototype and on data from the Data Collection and Image Recognition (DCIR) block (environmental changes); prediction of CPS state changes; BM adjustment/synthesis and the resulting change of *u*; DCIR control; and formation of prescriptions and scenarios addressing the emotional system (ES) of HS.

(2) DCIR—Data collection of video and audio streams, recognition of key points of the operator (HS) and other objects in the 3D space of the working area and prediction of the direction vector (displacement) of key points.

(3) Effector system (EFS)—Implementation of the impact on HS through sound, graphic, tactile or combined effects in physical and virtual interfaces (VR/AR); the impact is implemented both in an information mode and in a recommendation mode, for example, in the AR instruction format. There should also be a mode for archiving data and providing it to a higher level of management in order to control the behavior of HS.
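The displacement-prediction step attributed to DCIR in item (2) can be illustrated with the simplest possible forecast, constant-velocity extrapolation of tracked key points; a production system would use a learned or filtered predictor, so this is only a stand-in:

```python
import numpy as np

def predict_displacement(history, horizon=1):
    """Forecast the next position of each tracked key point by constant-
    velocity extrapolation over the last two frames.
    history: array-like of shape (frames, n_points, 3)."""
    history = np.asarray(history, dtype=float)
    velocity = history[-1] - history[-2]      # displacement per frame
    return history[-1] + horizon * velocity   # predicted key-point positions

# Hypothetical wrist key point moving along X at 0.05 m per frame.
frames = [[[0.00, 0.2, 0.5]],
          [[0.05, 0.2, 0.5]],
          [[0.10, 0.2, 0.5]]]
predicted = predict_displacement(frames)
```

The array of predicted positions is what DCIR would hand to the IM as the direction (displacement) vector of the key points.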

Thanks to the introduction of an intelligent system with external sensors, two channels become available for collecting and analyzing information—(11) and (12)—which determine the geometry of physical objects (the operator HS and any external objects) that cause, or may in the future create, the disturbances *fMHS* and *fME*, respectively. The DCIR module provides the IM with arrays of key points of objects, as well as a prediction of their displacement vector, thereby allowing the intelligent module, taking into account the CS behavior model and its current state, to quickly correct the behavior model or switch it. Therefore, the system implements control by prediction or in a hybrid mode: a combination of a series of control actions by error and by prediction (if collisions cannot be avoided).
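The hybrid mode described above, error-driven motion switched to predictive correction when a forecast obstacle enters a safety margin, can be sketched in one dimension; the distances, gain and switching rule are arbitrary illustrative values, not the authors' controller:

```python
def hybrid_step(tcp, target, predicted_obstacle, safe_dist=0.10, gain=0.5):
    """One hybrid control update for a 1-D tool-center-point position.
    Predictive branch: step away when the forecast obstacle position lies
    inside the safety margin; error branch: proportional step to the target."""
    if abs(predicted_obstacle - tcp) < safe_dist:
        # control by prediction: move away from the forecast obstacle
        direction = -1.0 if predicted_obstacle > tcp else 1.0
        return tcp + gain * safe_dist * direction
    # control by error: proportional step toward the commanded target
    return tcp + gain * (target - tcp)

# Obstacle forecast far from the TCP -> ordinary error-driven step.
free_step = hybrid_step(0.0, 1.0, predicted_obstacle=0.5)
# Obstacle forecast inside the 0.10 m margin -> predictive avoidance step.
avoid_step = hybrid_step(0.0, 1.0, predicted_obstacle=0.05)
```

The point of the sketch is the switch itself: the predictive branch acts before any contact force appears, whereas the standard system would react only after its internal sensors trigger.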

In addition, since channel (12) is available simultaneously with the archive of HS action patterns in the IM, it becomes possible to detect counterproductive anomalies in human behavior that generate (cause) the *fMHS* disturbance through channel (1), driven by the emotional component (ES) of HS. Thus, through the DCIR–IM–EFS interaction, the *fMHS* disturbance can be eliminated, or its influence significantly reduced, either by eliminating the perturbation generator itself in HS via channel (12) or by promptly correcting the model in BMs and forming a new control law (channel (7)), correcting the state of CS even before its standard sensor system is triggered.

In addition to solving the problems of avoiding collisions, the IM must take into account the efficiency criteria for CS operation established in BMs, including models for increasing energy efficiency and models for minimizing the time of execution of a technological operation.

In order to justify the technical and economic feasibility of prospective work, an experiment was conducted on the basis of a draft prototype of an intelligent system for planning the trajectory movements of an industrial robot equipped with force–torque sensors. The experiment was aimed at identifying differences between the values of the integral indicators of the technological process (the possibility of avoiding a collision, the total duration of the operation including the time spent avoiding the collision, and the energy consumed) when the tool is positioned by a robot under the control of a system built according to the scheme in Figure 1 (option 1) and under a system corresponding to the scheme in Figure 2 (option 2).

Figure 3a shows the trajectory of the tool moved by the robot in space (option 1) in the absence of an obstacle on the path; the execution time and energy consumed are presented in the first column of Table 1. Figure 3b shows the trajectory (option 1) under the conditions of a static obstacle of unknown shape (disturbance *fME* (Figure 2)), with the system working off the collision by the "probing" method; the execution time and energy consumed are presented in column 2 of Table 1. Figure 3c shows the trajectory (option 2) under the same static obstacle of unknown shape (disturbance *fME* (Figure 2)), recognized by the DCIR module and handled by the collision-avoidance system along the trajectory planned by the IM; the resulting execution times and consumed energy are presented in columns 3–7 of Table 1. It should be noted that these trajectories were obtained using only one criterion—collision avoidance. The data show that the operation execution time and the consumed energy differ across the five implementations of option 2, which demonstrates the possibility and expediency of adding, alongside collision avoidance, criteria for reducing operation time, improving energy efficiency, or their weighted combinations. It is also worth noting that option 1 cannot always avoid collisions and may cause (with the standard behavior algorithms of the standard system) activation of the standby mode, which increases the execution time of the operation for an indefinite period, or an emergency stop (according to signals from current sensors).
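One way to compute integral indicators of the kind compared in Table 1 (operation duration and consumed energy) from logged joint signals is sketched below, assuming mechanical energy E = ∫ Σ|τᵢωᵢ| dt; the log values are synthetic, and the paper does not specify the authors' exact measurement pipeline:

```python
import numpy as np

def integral_indicators(t, joint_torques, joint_velocities):
    """Operation duration and consumed mechanical energy from logged samples,
    E = integral of sum_i |tau_i * omega_i| dt (trapezoidal rule).
    t: (N,); joint_torques, joint_velocities: (N, n_joints)."""
    power = np.sum(np.abs(joint_torques * joint_velocities), axis=1)
    duration = t[-1] - t[0]
    energy = np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t))
    return duration, energy

# Synthetic 2-joint log over 2 s: torque 1 N*m, velocity 0.5 rad/s per joint.
t = np.linspace(0.0, 2.0, 201)
tau = np.ones((201, 2))
omega = np.full((201, 2), 0.5)
duration, energy = integral_indicators(t, tau, omega)
```

Running the same computation over the logs of options 1 and 2 yields directly comparable duration and energy figures for each trajectory implementation.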

**Figure 3.** Trajectories of the tool movement: (**a**) the operator sets how to get around the obstacle; (**b**) automatic search for the passage of obstacles; (**c**) using computer vision to pass an obstacle.
The obvious difference (Figure 4) between the results obtained with the standard control system and with the expanded one (more than a twofold improvement in energy and time) allows us to conclude that it is expedient to extend standard collaborative robots with external intelligent systems, which will positively affect the efficiency of the production process.

**Figure 4.** The results of measurements of the time of passage by the robot from point A to point B.

#### **3. Conclusions**

The approach proposed by the authors to the intellectualization of industrial cyberphysical systems conceptually embeds the authors' main prospective research areas into the overall CPS control system, presented as a structural–functional diagram (Figure 2): the development of algorithms for multimodal, real-time analysis of the scene (working-area space), including the detection, classification and prediction of the behavior of the operator and objects (the functionality of the DCIR block); the development of methods for forming CS movement trajectories and their operational correction based on the analysis of external disturbances (learning algorithms) (the functionality of the IM block); and the development of a dialogue and effector system that increases the overall efficiency of the CPS by influencing HS (the functionality of the EFS block).

The results of a simplified experiment based on a simulation prototype of an intelligent system demonstrate significant physical and economic effects (from 15% to 182%, depending on the operation) that can be obtained in production processes when standard collaborative solutions are complemented with an extended sensor system and intelligent modules for optimizing trajectory movements according to the proposed approach. This supports the feasibility of further research on improving intelligent scene-analysis methods and optimizing the movement of collaborative robots in a nondeterministic environment.

In the future, it is planned to perform a semi-natural experiment on the industrial collaborative system KUKA LBR iiwa 7 R800 using moving objects that exert a disturbing effect. A promising area of research is the implementation of the DCIR module using neural-network pattern-recognition algorithms.

**Author Contributions:** Conceptualization, M.G. and Y.I.; investigation, M.M.; methodology, M.G. and Y.I.; software, D.G.; validation, S.S., S.Z. and M.M.; writing—original draft preparation, M.G.; writing—review and editing, A.G. and D.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was supported by the Russian Science Foundation (project No. 22-71-10093).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
