Article

Stable Heteroclinic Channel Networks for Physical Human–Humanoid Robot Collaboration

Department of Automatics, Biocybernetics and Robotics, Jožef Stefan Institute, Jamova Cesta 39, 1000 Ljubljana, Slovenia
*
Authors to whom correspondence should be addressed.
Sensors 2023, 23(3), 1396; https://doi.org/10.3390/s23031396
Submission received: 29 November 2022 / Revised: 10 January 2023 / Accepted: 18 January 2023 / Published: 26 January 2023
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation II)

Abstract

Human–robot collaboration is one of the most challenging fields in robotics, as robots must understand human intentions and suitably cooperate with them in the given circumstances. Yet although this is one of the most investigated research areas in robotics, it is still in its infancy. In this paper, human–robot collaboration is addressed by applying a phase state system, guided by stable heteroclinic channel networks, to a humanoid robot. The base mathematical model is first defined and illustrated on a simple three-state system. Further on, an eight-state system is applied to a humanoid robot to guide it and make it perform different movements according to the forces exerted on its grippers. The movements presented in this paper are squatting, standing up, and walking forwards and backwards, while the motion velocity depends on the magnitude of the applied forces. The method presented in this paper proves to be a suitable way of controlling robots by means of physical human–robot interaction. As the phase state system and the robot movements can both be further extended to make the robot execute many other tasks, the proposed method provides a promising basis for the further investigation and realization of physical human–robot interaction.

1. Introduction

Everyday movements such as sitting, standing, running, jumping, and handling and manipulating different objects are fairly simple for healthy humans to perform. Furthermore, there are many more complicated tasks that humans perform on a daily basis in collaboration with other humans, such as dancing, carrying different objects, or pushing and pulling while holding each other to provide help and assistance to ill or disabled individuals. While physically interacting with each other, humans can easily understand mutual intentions and can adapt instantaneously to the other person's behavior to suitably cooperate and provide the necessary assistance. This is possible because humans are capable of interpreting body motion and the different gestures typical of human non-verbal communication [1,2]. Moreover, if speech is added to the communication between humans, cooperation becomes even smoother.
While performing all these tasks, humans can subconsciously coordinate their bodies and take care of their stability to fully control their motion. In addition, while physically interacting with other individuals, humans can not only mutually adapt to their collaborators but also take care of the other person's stability. This allows humans to smoothly cooperate and complete different kinds of tasks even if no prior agreement on the course of the task was made.
To assist humans, robots are being employed in different fields of industry and science [3,4], in healthcare and medicine [5,6,7], in hospitality and tourism [8,9,10], as domestic robots [11], and in many other areas. For such applications, humanoid robots are of special interest, as they have body shapes similar to those of humans and could, therefore, learn to cooperate with them by mimicking human-like behavior. However, their disadvantage is their relatively small support polygon in comparison to their large body size, which makes them easily unstable when fast movements are performed or external forces are applied, as is the case in physical human–robot interaction. This is why every movement or physical interaction with the environment represents a challenge for them to remain stable and not fall. To ensure the stability of humanoid and other floating-base robots, i.e., robots that are not fixed to a surface, while they perform different types of movements and physically interact with the environment, various methods and stability parameters have been developed, mostly based on the zero moment point [12,13,14,15,16,17,18,19,20,21,22], but also on the linear and angular momenta of the robot [23,24,25], gravito-inertial wrench cones for non-coplanar contacts [26,27], stability polygons [28], or model predictive control [29]. This has enabled enormous advances not only in the development of humanoid robot motion techniques such as walking [30,31,32,33,34], running and jumping [35], skiing [36], and many others, but also in the development of skills for physical cooperation with humans [37], object grasping and manipulation [38,39,40,41,42,43], and other types of physical interaction with the environment. A review of the state-of-the-art advances in bipedal walking robots and their hardware design can be found in [44,45].
Robots mostly perform tasks on their own, and their physical cooperation with humans is very limited. Furthermore, most of the tasks performed by robots are pre-programmed and defined in advance; some robots are teleoperated by humans in real time, although reactive control algorithms are also being extensively developed. However, robots do not have a sense of mutual physical interaction with humans that would allow them to keep each other balanced and to optimize the interaction so as to assist humans in the best possible way. This is why the physical collaboration tasks that robots can perform using state-of-the-art reactive controllers are limited and very specific: robots cannot understand human intentions in an arbitrary scenario, and it is, therefore, impossible for them to assist humans in all everyday situations. It is, thus, crucial to study human behavior and motion to understand people's intentions, as these dictate the course of the interaction [46,47,48]. Even though human–robot interaction and cooperation are still in their initial phases, this is one of the fastest-growing fields in robotics, as different methods and strategies are being developed and applied to robots to make them understand human intentions and physically interact with them [49,50,51,52,53,54,55,56,57].
One way to make robots collaborate with humans is to use stable heteroclinic channel (SHC) networks [58,59,60,61]. This is a continuous dynamical phase state system (PSS) connecting an arbitrary number of saddle points that can be interpreted as states, as shown in Figure 1.
The transitions between them are smooth; they can be reversed, and most importantly, they have a phase. Such an SHC system can be coupled to different robot motions, where each state corresponds to a certain final robot configuration, while the robot motion from the initial to the final state is coupled to the phase of the SHC transition [62].
In this paper, first, the SHC networks are defined and presented on a simple model consisting of three states. Second, the SHC networks are implemented in a more complex dynamical model consisting of eight states. In the end, the eight-state SHC network system is coupled to the motion of a humanoid robot, where the state transitions are triggered by the forces applied to the grippers of the humanoid robot. By pushing or pulling the grippers of the humanoid robot up or down, a person triggers different SHC state transitions and, this way, controls the motion of the robot. According to the SHC states, the humanoid robot stands still, squats, stands back up, and walks forwards or backwards.
With the coupling of the SHC networks to the robot motion, it is shown how this dynamical model can be used to guide robots and make them cooperate with humans. Although the robot motions for the tasks presented in this paper were pre-defined, the investigated approach proves to be a promising basis for the further development of human–robot collaboration techniques.
In Materials and Methods, the SHC networks are first defined and applied to a simple three-state PSS. Further on, the application of an eight-state PSS guided by the SHC networks to the motion of a humanoid robot is presented, and the robot motion synthesis is described. In Results, all the possible PSS transitions, resulting in robot squatting, standing up, and walking forwards and backwards, are presented and described. In Discussion and Conclusions, the application of the presented PSS to a humanoid robot is validated in terms of human–robot interaction and cooperation, and directions for further development are given.

2. Materials and Methods

2.1. The Definition of the SHC Networks

The method that controls the robot motion presented in this paper is built on the SHC networks [58,59,60,61]. This is a dynamical PSS that connects saddle points, which can be interpreted as states, each lying on its own coordinate axis. The transitions between the states occur via self-stabilizing limit cycles, each with its own velocity, activation time, and phase, and can be arbitrarily allowed or prohibited. Furthermore, the transitions are nonexclusive, so multiple transitions can be initiated at the same time. In addition, a transition can also be reversed, so that the system returns to the initial state. A representation of a three-state SHC network is presented in Figure 1.
The dynamical PSS built on the SHC networks can be defined for an n-dimensional state vector $\mathbf{x}$ as
$$\dot{\mathbf{x}} = \mathbf{x} \circ \left(\boldsymbol{\alpha} - \left(\boldsymbol{\rho}_0 - \boldsymbol{\rho}_\Delta \circ \left(\mathbf{T} + \mathbf{G}\right)\right)\mathbf{x}^{\gamma}\right)\eta(t) + \dot{\boldsymbol{\delta}}(t),$$
where ∘ denotes element-wise multiplication [61]. The matrix $\mathbf{T}$ is the state transition matrix, with values $T_{ji} = 1$ if the transition $i \to j$ exists and $T_{ji} = 0$ otherwise, while the matrix $\mathbf{G}$ is the greediness matrix, which determines the tendency of the system to keep multiple transitions active, but can also gradually reverse a transition. The $n \times n$ matrix
$$\boldsymbol{\rho}_0 = \boldsymbol{\alpha}\left(\boldsymbol{\beta}^{-1}\right)^{T} \circ \left(\mathbf{I} + \left(\mathbf{1}\mathbf{1}^{T} - \mathbf{I}\right) \circ \left(\mathbf{1}\mathbf{1}^{T} + \boldsymbol{\alpha}^{-1}\boldsymbol{\alpha}^{T}\right)\right)$$
defines the saddle points, each located on its own coordinate axis, while the matrix product $\boldsymbol{\rho}_\Delta \circ \mathbf{T}$ determines the stable heteroclinic channels. The matrix $\boldsymbol{\rho}_\Delta$ is also of dimensions $n \times n$ and is defined as
$$\boldsymbol{\rho}_\Delta = \left(\boldsymbol{\alpha} \circ \left(\mathbf{1} + \boldsymbol{\nu}^{-1}\right)\right)\left(\boldsymbol{\beta}^{-1}\right)^{T}.$$
The three n-dimensional parameter vectors $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $\boldsymbol{\nu}$ determine the growth rates, the positions of the states, and the shapes of the saddle points, respectively. The matrices $\boldsymbol{\rho}_0$ and $\boldsymbol{\rho}_\Delta$ maintain constant values, while the matrices $\mathbf{T}$ and $\mathbf{G}$ can be modified during the evolution of the dynamical system. The scalars $\gamma$ and $\eta(t)$ determine the location of the channels and adjust the evolution rate of the dynamical system, respectively, while $\dot{\boldsymbol{\delta}}(t)$ determines when the system will leave a state and pushes the system away from the saddle points along a certain transition.
As the states lie each on its own coordinate axis, they form an orthonormal basis. Each transition determined by the SHC networks, therefore, lies on the plane spanned by the coordinate axes containing its corresponding initial and final state. The activation function for all possible transitions, which is invariant to the scaling of $\mathbf{x}$ and takes values in the interval $[0, 1]$, can, therefore, be determined (with ⊘ denoting element-wise division and the exponent ∘2 element-wise squaring) as
$$\boldsymbol{\Lambda}^{t} = 16\left(\mathbf{x}\mathbf{x}^{T}\right)^{\circ 2} \oslash \left(\left(\mathbf{x}\mathbf{1}^{T} + \mathbf{1}\mathbf{x}^{T}\right)^{\circ 2} + |\mathbf{x}|^{2}\,\mathbf{1}\mathbf{1}^{T} - \mathbf{x}^{\circ 2}\mathbf{1}^{T} - \mathbf{1}\left(\mathbf{x}^{\circ 2}\right)^{T}\right)^{\circ 2},$$
where 1 is an n-dimensional vector of ones. The state activations of the dynamical system can be computed from the residual of the transition activation function as
$$\boldsymbol{\lambda}^{s} = \frac{\mathbf{x}^{\circ 2}}{|\mathbf{x}|^{2}} \circ \left(\mathbf{1} - \boldsymbol{\Lambda}^{t}\mathbf{1}\right).$$
As the diagonal elements of the transition activation matrix $\boldsymbol{\Lambda}^{t}$ are meaningless, an activation matrix $\boldsymbol{\Lambda}$ containing all the state transitions and activations, with the property $\sum_{j,i} \Lambda_{ji} = 1$, can be determined as
$$\Lambda_{ji} = \begin{cases} \Lambda^{t}_{ji}; & j \neq i \\ \lambda^{s}_{i}; & j = i. \end{cases}$$
With regard to the fact that transitions determined by the SHC networks lie on planes defined by their corresponding predecessor and successor states, a phase with a value from the interval [ 0 , 1 ] can be assigned to each active transition from the state i to the state j as
$$\Phi_{ji} = \frac{|x_{j}|}{|x_{j}| + |x_{i}|}.$$
The phase has the value 0 when the system is in its initial state and reaches the value 1 when the system evolves to its final state.
The transition of the system to the state j can be triggered by applying a small positive value to $\dot{\delta}_{j}(t)$. On the contrary, a small negative value applied to $\dot{\delta}_{j}(t)$ prohibits the corresponding transition from occurring. To determine the velocity biases for each transition, $\dot{\boldsymbol{\delta}}(t)$ can be defined as
$$\dot{\boldsymbol{\delta}}(t) = \left(\boldsymbol{\Lambda} \circ \mathbf{B}\right)\mathbf{x},$$
where each element $B_{ji}$ of the bias matrix $\mathbf{B}$ specifies the velocity bias of the transition $i \to j$. If more than one velocity bias from a certain state has a positive value, multiple transitions will be triggered at once, but the attractor's shape will make the system converge towards only one final state.
To enable a continuous modification of the transition velocity, the scaling factor $\eta(t)$ can be expressed as
$$\eta(t) = 2^{\sum_{j,i} A_{ji}\,\Lambda_{ji}}.$$
The parameter A speeds up or slows down a transition relative to the speed determined by the vector α and can be modified in every iteration of the PSS. For the default transition speed, the value of A should be set to 0.
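The construction above can also be sketched numerically. The following Python fragment is a minimal illustration, not the authors' implementation: it builds $\boldsymbol{\rho}_0$ and $\boldsymbol{\rho}_\Delta$ entry-by-entry following the Lotka–Volterra-type construction of [61] and performs one Euler step of the dynamics with $\gamma = 1$; all names are our own.

```python
import numpy as np

def shc_matrices(alpha, beta, nu):
    """Build rho_0 (saddle points) and rho_Delta (channel openings)."""
    n = len(alpha)
    # rho_0: off-diagonal entries (alpha_i + alpha_j)/beta_j, diagonal alpha_i/beta_i
    rho0 = (alpha[:, None] + alpha[None, :]) / beta[None, :]
    np.fill_diagonal(rho0, alpha / beta)
    # rho_Delta: column i holds alpha_i (1 + 1/nu_i)/beta_i, so that an open
    # channel T_ji = 1 reduces rho_ji to (alpha_j - alpha_i/nu_i)/beta_i
    rho_delta = np.tile((alpha * (1.0 + 1.0 / nu) / beta)[None, :], (n, 1))
    return rho0, rho_delta

def shc_step(x, alpha, rho0, rho_delta, T, G, eta=1.0, ddelta=0.0, dt=1e-2):
    """One Euler step of the phase state system (gamma = 1)."""
    rho = rho0 - rho_delta * (T + G)   # allowed transitions open channels
    dx = x * (alpha - rho @ x) * eta + ddelta
    return x + dt * dx
```

With all channels closed ($\mathbf{T} = \mathbf{G} = 0$), each saddle $\mathbf{x} = \beta_i \mathbf{e}_i$ is an equilibrium of the step; a small positive $\dot{\delta}_j$ then pushes the system towards state j, as described above.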

2.2. A Three-State PSS Guided by the SHC Networks

For a better representation of how a PSS based on the SHC networks behaves, a simple three-state model is presented. The possible transitions between the states of the presented model are $1 \to 2$, $2 \to 1$, $2 \to 3$, and $3 \to 1$, as shown in Figure 2, so the state transition matrix is
$$\mathbf{T} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$
All the elements of the greediness matrix $\mathbf{G}$ were set to 0 so that multiple simultaneous transitions were not started. The values of the vectors $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $\boldsymbol{\nu}$ were $\alpha_i = 20$, $\beta_i = 1$, and $\nu_i = 1$, respectively, for $i \in [1, 3]$, which means that the growth rates and the saddle point shapes were all the same, while the saddle points were each located on its coordinate axis at the value 1. The parameter $\gamma$ was set to 1, so the transitions directly left their corresponding initial state and approached their final state. In the beginning, the elements of the bias matrix were set to $B_{ji} = -10^{-4}$ so that all the transitions were blocked, while $A$ was set to 0 so that the speed of the evolution of the system was not modified. Both the matrix $\mathbf{B}$ and the parameter $A$ were modified during the simulation.
The time evolution of the three-state SHC system is shown in Figure 3, where the step size of the model integration was set to $dt = 10^{-2}$. The initial state of the system was set to 1 and, therefore, $\mathbf{x}(t = 0) = (1, 0, 0)$. At the time $t = 2\,\mathrm{s}$, the bias matrix $\mathbf{B}$ was modified so that $B_{21} = 10^{-1}$, and, therefore, the transition of the system to state 2 was triggered. But at the time $t = 2.2\,\mathrm{s}$, the bias matrix $\mathbf{B}$ was modified so that $B_{12} = 10^{3}$. With this modification, the system was forced to return to state 1 immediately. After returning to state 1, all the elements of the velocity bias matrix were set back to the initial value $B_{ji} = -10^{-4}$, so the system was forced to stay in state 1.
At the time $t = 4\,\mathrm{s}$, the bias matrix element for the transition from state 1 to state 2 was again modified to $B_{21} = 10^{-1}$, and, therefore, the transition of the system towards state 2 resumed. When the system reached the final state, the state vector became $\mathbf{x} = (0, 1, 0)$.
At $t = 6\,\mathrm{s}$, the system was pushed towards state 3, so all the elements of the bias matrix $\mathbf{B}$ were set to $-10^{-4}$, except $B_{32} = 10^{-1}$. Furthermore, to decelerate the transition, the modification $A = -10$ was made. As can be seen in Figure 3, this transition between states 2 and 3 was slower than the previous transition from state 1 to state 2. When the system reached state 3, the state vector became $\mathbf{x} = (0, 0, 1)$, and both the parameter $A$ and the matrix $\mathbf{B}$ were set back to their initial values.
At $t = 8\,\mathrm{s}$, the system was sent towards state 1 by setting $B_{13} = 10^{-1}$. After 0.2 s, the system was sent back to state 3 by setting $B_{31} = 10^{3}$. When the system was back at state 3, all the elements of $\mathbf{B}$ were again set to the initial values so that the system remained in the current state.
The next transition occurred at time $t = 10\,\mathrm{s}$, when the system transited to state 1 by setting $B_{13} = 10^{-1}$. At the end of the transition, the state vector became $\mathbf{x} = (1, 0, 0)$, and all the elements of the matrix $\mathbf{B}$ were again set to $-10^{-4}$ to fix the system in the current state.
At $t = 12\,\mathrm{s}$, the transition towards state 2 was activated by setting $B_{21} = 10^{-1}$. However, 0.03 s after its activation, the velocity of the transition was decreased by setting $A = -50$. At the time $t = 14\,\mathrm{s}$, the velocity of the transition was modified again, this time by increasing it relative to the reference velocity with $A = 10$. The modifications in the transition velocities can be seen in Figure 3. When the system reached state 2, the state vector became $\mathbf{x} = (0, 1, 0)$, and both $A$ and $\mathbf{B}$ were set to their initial values.
Lastly, at $t = 17\,\mathrm{s}$, the value $B_{12} = 10^{-1}$ was applied, the system transited one more time to state 1, and the state vector once again became $\mathbf{x} = (1, 0, 0)$.
Figure 3 shows the time evolution of the discussed PSS controlled by the SHC networks as it was guided by the parameters that control its behavior. The system transits between all three states according to the transitions allowed by the matrix T . Furthermore, some transitions were also reversed during their course. In addition, the velocities of the transitions were also modified before and during the transitions.
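To make this behavior concrete, the following sketch integrates a single 1 → 2 transition of such a three-state system and tracks the phase of the transition. It is our own illustration under simplifying assumptions, not the authors' code: only the 1 → 2 channel is opened, $\gamma = \eta = 1$, and the transition is triggered by a constant small positive $\dot{\delta}_2$ instead of the full bias-matrix mechanism; the parameter values follow this section.

```python
import numpy as np

# Uniform SHC parameters from Section 2.2
n, dt = 3, 1e-2
alpha, beta, nu = np.full(n, 20.0), np.ones(n), np.ones(n)

# rho = rho_0 with the single channel 1 -> 2 opened (T_21 = 1)
rho = (alpha[:, None] + alpha[None, :]) / beta[None, :]
np.fill_diagonal(rho, alpha / beta)
rho[1, 0] -= alpha[0] * (1.0 + 1.0 / nu[0]) / beta[0]   # open channel 1 -> 2

x = np.array([1.0, 1e-6, 1e-6])        # start at (near) state 1
ddelta = np.array([0.0, 1e-5, 0.0])    # constant small push towards state 2

phases = []
for _ in range(1000):                  # 10 s of simulated time
    x += dt * (x * (alpha - rho @ x) + ddelta)
    phases.append(x[1] / (x[1] + x[0]))   # phase of the 1 -> 2 transition
```

Running this, the state vector leaves the saddle at state 1, flows through the channel, and settles at state 2, while the transition phase rises smoothly from roughly 0 to roughly 1, mirroring the behavior shown in Figure 3.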

2.3. Application of the PSS Guided by the SHC Networks to the Humanoid Robot

Due to its maneuverability, a dynamical PSS controlled by the SHC networks is a suitable model for guiding robotic systems and controlling their motion. During the transition from an initial to a final state, the phase of the corresponding PSS transition, defined by Equation (7), increases from 0 to 1. If the robot motion is parametrized, it can be coupled to the phase of the PSS; this way, the robot moves according to the phase of the corresponding PSS transition. When the phase of the PSS transition is 0, the robot is in its initial state, as is the PSS itself. However, as the phase of the PSS evolves, the robot modifies its configuration and finally reaches its final configuration when the phase of the transition is 1 and the system reaches its final state.

2.3.1. The Motion Tasks and Their Corresponding Movements

In this paper, the dynamical PSS controlled by the SHC networks is employed to guide the humanoid robot Talos with 32 degrees of freedom that can perform human-like motion [63]. The robot movements presented in this paper constitute three different tasks, as shown in Figure 4.
Each state of the PSS is connected to a specific robot configuration, while the transitions between the PSS states, which cause the robot motion and the execution of a certain task, are triggered by the forces applied to the wrist grippers of the humanoid robot, measured by the force-torque sensors mounted in the grippers.
In the first task, the humanoid robot moves its forearms, while the PSS transits between states 1 and 2, as shown in Figure 4. When the PSS is in state 1, the humanoid robot is standing on its legs and has its forearms oriented downwards, as can be seen in Figure 5a.
When a vertical pulling force oriented upwards, F up , is applied to its grippers, the humanoid robot starts to lift up its forearms, and the PSS starts to transit to state 2. When state 2 is reached, the humanoid robot has its forearms oriented horizontally, as shown in Figure 5b, in the same way as humans would when they are assisted by other people that help them to move. If no forces are further applied, F = 0 , after a certain amount of time, the PSS transits back to state 1, and the humanoid robot puts its forearms down.
The second task comprises two movements, as presented in Figure 4. In the first movement, the humanoid robot squats, as shown in Figure 5c, while the PSS transits from state 2 to state 3. In the second movement, the humanoid robot stands up with its forearms oriented horizontally, while the PSS returns to state 2. The transition of the PSS from state 2 to state 3 is triggered by a vertical pushing force oriented downwards, F down , applied on the grippers of the humanoid robot. If no forces are further applied when the PSS reaches state 3, and the humanoid robot is in the squatting position, both the PSS and the humanoid robot remain in this state forever. On the other hand, the transition from state 3 back to state 2 is triggered by a vertical pulling force oriented upwards, F up , and the humanoid robot stands up with its forearms oriented horizontally. If no forces are further applied when state 2 is reached after a certain amount of time, the PSS transits back to state 1, and the humanoid robot puts its forearms down again and, therefore, performs the movement of the first task.
In the third task, the humanoid robot walks forwards and backwards, and, therefore, multiple movements are employed, while the PSS transits between states 2, 4, 5, 6, 7, and 8, as schematically depicted in Figure 4. The initial humanoid robot configuration of the third task is the standing up position with its forearms oriented horizontally, corresponding to state 2 of the PSS. If a pulling force oriented horizontally, F pull , is applied on the grippers of the humanoid robot, the PSS transits from state 2 to state 4, and the humanoid robot performs the first step forwards with its left leg, seen in Figure 5d. For the transitions between states 4–8, the following principles hold. If a horizontal pulling force, F pull , is applied on the grippers of the humanoid robot, the PSS transits to the state that makes the humanoid robot perform a step forwards. If a horizontal pushing force is applied on the grippers of the humanoid robot, F push , the PSS transits to the state that makes the humanoid robot perform a step backwards. Figure 5e shows the humanoid robot with its right leg forwards or left leg backward, which correspond to states 5 or 8 of the PSS, while Figure 5f shows the humanoid robot with its left leg forwards or right leg backward, corresponding to states 7 or 6 of the PSS, respectively. However, if no force is applied when the PSS is in any of the states 4–8, after some time, the PSS returns to state 2, and the humanoid robot returns to the corresponding standing position with its forearms oriented horizontally. Still, if no force is applied to the humanoid robot, after some time, the PSS transits back to state 1, and the humanoid robot puts its forearms down, which is the movement of the first task.
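The task structure just described can be summarized as a small transition table. The sketch below is our own schematic reading of Figure 4, with hypothetical command labels ("up", "down", "pull", "push" for the force directions and "none" for the no-force timeout); it is not the PSS itself, which transits continuously, but it captures which state follows which.

```python
# Successor table for the eight PSS states (our labels, derived from Figure 4)
NEXT = {
    (1, "up"):   2,                   # lift forearms
    (2, "down"): 3,                   # squat
    (2, "pull"): 4,                   # first step forwards with the left leg
    (2, "none"): 1,                   # timeout: put forearms back down
    (3, "up"):   2,                   # stand back up
    (4, "pull"): 5, (4, "push"): 8,   # states 4-8: walking forwards/backwards
    (5, "pull"): 7, (5, "push"): 6,
    (6, "pull"): 5, (6, "push"): 8,
    (7, "pull"): 5, (7, "push"): 8,
    (8, "pull"): 7, (8, "push"): 6,
}
for s in (4, 5, 6, 7, 8):
    NEXT[(s, "none")] = 2             # timeout: back to standing with legs parallel

def next_state(state, command):
    """Return the successor PSS state; stay put if the input is not allowed."""
    return NEXT.get((state, command), state)
```

Note that state 3 (squatting) has no "none" entry, matching the description that the robot remains squatting indefinitely when no forces are applied.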

2.3.2. Motion Synthesis

The movements of the humanoid robot were synthesized in different ways. The movement of the first task, where the humanoid robot lifts up its forearms, is a simple movement, as only the angles of the elbows were modified. All the movements of the second and the third tasks are much more complex, as multiple joint angles need to be modified simultaneously, each with a different, non-constant velocity. Furthermore, throughout every movement from the initial to the final position, the stability of the humanoid robot had to be guaranteed.
The movement of the first task, which corresponds to the PSS transition from state 1 to state 2, was created in the Gazebo simulator [64], where the humanoid robot was first positioned in the default standing position of its default controllers. Guided by a Matlab script specifying only the final angles of the elbows, the humanoid robot then moved its forearms to the final position.
The squatting and the standing up motions of the second task, which correspond to the transitions from state 2 to state 3 and back to state 2 of the PSS, were obtained by applying human motion, which was recorded with a motion capture system as shown in Figure 6, to the humanoid robot.
The movements for the walking steps of the third task, which represent the transitions between states 2 and 4–8 of the PSS, were obtained by a humanoid robot walking simulation in the Gazebo simulator. Here, the humanoid robot was first positioned in the standing-up pose with its forearms oriented horizontally, corresponding to state 2 of the PSS. Then, with the integrated ROS service command do_step of the walking controller, which is a ros_control plugin and can run with the default controllers, the humanoid robot performed the first step forward with its left leg, two more steps forward with both legs, and a half step back with its right leg to the initial position with both legs positioned parallel one next to the other. All the other steps were obtained as the reverse of these recorded steps.

2.3.3. Motion Parametrization

During every movement of the humanoid robot in the Gazebo simulator, all its joint angles were recorded and later parametrized using the Gaussian radial basis functions (GRBF) method [65]. With this parametrization, the time dependence of a joint angle during a certain movement is transformed into a phase dependence, $\tau \in [0, 1]$, while the recorded angles are expressed with a combination of N Gaussian functions
$$\Psi_{i}(\tau) = \exp\left(-\frac{\left(\tau - c_{i}\right)^{2}}{2\sigma_{i}^{2}}\right)$$
as
$$\Phi_{\mathrm{GRBF}}(\tau) = \frac{\sum_{i} \omega_{i}\,\Psi_{i}(\tau)}{\sum_{i} \Psi_{i}(\tau)},$$
where $c_{i}$ and $\sigma_{i}$ are the center of the peak and the standard deviation of the i-th Gaussian function, respectively, $\omega_{i}$ is the weight of the i-th Gaussian function, and i runs from 1 to an arbitrarily chosen N.
An example of the GRBF method is shown in Figure 7 for the knee joint angle of the humanoid robot during the standing-up motion. As can be seen, the Gaussian functions sum up to parametrize the values of the knee joint angle throughout the whole movement. Moreover, if N is chosen correctly, the GRBF method cancels out the oscillations of the angle that can be attributed to the vibrations of the robot.
With such a parametrization of all joint angles, their values can be obtained for every τ . Furthermore, the phase of the transition of the PSS guided by the SHC networks can be coupled to the phase of the GRBF parametrization of the robot joint angles. This way, the phase of the PSS can directly determine the robot configuration and guide its motion.
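The parametrization can be sketched as follows. This is our own illustration of Equations (10) and (11); the weights $\omega_i$ are obtained here by ordinary least squares over the recorded samples, which is one common choice and not necessarily how the authors fitted them, and the evenly spaced centers and the width $\sigma = 1/N$ are our assumptions.

```python
import numpy as np

def grbf_fit(phase, angles, N=20, sigma=None):
    """Fit the weights of N Gaussian radial basis functions to a recorded
    joint-angle trajectory sampled at the given phase values in [0, 1]."""
    c = np.linspace(0.0, 1.0, N)                 # evenly spaced centres (assumed)
    sigma = sigma if sigma is not None else 1.0 / N
    Psi = np.exp(-(phase[:, None] - c[None, :])**2 / (2.0 * sigma**2))
    Psi = Psi / Psi.sum(axis=1, keepdims=True)   # normalised basis, as in Eq. (11)
    w, *_ = np.linalg.lstsq(Psi, angles, rcond=None)
    return c, sigma, w

def grbf_eval(tau, c, sigma, w):
    """Evaluate the parametrised joint angle at phase tau in [0, 1]."""
    Psi = np.exp(-(tau - c)**2 / (2.0 * sigma**2))
    return float(Psi @ w / Psi.sum())
```

Because the basis is smooth, evaluating the fit also filters out the high-frequency oscillations mentioned above, and the PSS transition phase can be fed directly into grbf_eval in place of τ.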

3. Results

Application of the PSS Guided by the SHC Networks to the Humanoid Robot

The dynamical PSS guided by the SHC networks, described in Section 2.1, was applied to guide the motion of the humanoid robot, as described in Section 2.3.
To enable the transitions between the PSS states as presented in the sketch in Figure 4, the state transition matrix was set to
$$\mathbf{T} = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 1 & 1 & 0
\end{pmatrix}.$$
The matrix $\mathbf{G}$, the vectors $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $\boldsymbol{\nu}$, along with the step size $dt$ and the parameter $\gamma$ of the PSS, were all set to the same values as for the three-state system presented in Section 2.2. The values of the bias matrix were set for the transition from the state i to the state j in the following way: for a transition that was blocked, $B_{ji} = -10^{-3}$; for a transition that was triggered, $B_{ji} = 10^{-1}$; for a transition that was triggered after some time, $B_{ji} = 10^{-3}$; while for a transition that was reversed, $B_{ij} = 10$.
A transition was triggered if forces with an absolute value larger than 15 N were applied in the horizontal or in the vertical direction to both grippers of the humanoid robot, and if forces applied in this direction could trigger a transition from the current state of the PSS. However, when pulling or pushing the grippers of the humanoid robot, forces were always applied in both the vertical and the horizontal direction simultaneously. The considered direction was, therefore, chosen as the direction in which the sum of the absolute values of the forces on both grippers was larger. On the other hand, if the PSS was in state 2 or in states 4–8 and no forces were exerted on the humanoid robot, the transition to state 1 or to state 2, respectively, was automatically triggered after some time. Moreover, if, during a transition, forces were applied in the direction opposite to the forces that triggered the transition, the transition was reversed, and the PSS returned to the initial state. It should be emphasized that once a transition was triggered and not reversed, the PSS reached the final state even if no further forces were exerted in the direction that triggered it. However, if, during a transition, forces were applied in the same direction as the ones that triggered it, the velocity of the transition was increased.
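The triggering rule described above can be sketched as a simple decision function. This is our own schematic reading, with assumed sign conventions (positive horizontal = pulling, positive vertical = upwards) and hypothetical names; the real system, of course, reads the gripper force–torque sensors continuously.

```python
def triggering_input(f_left, f_right, threshold=15.0):
    """Map gripper forces to a discrete PSS trigger, or None if too weak.
    f_left, f_right: (horizontal, vertical) force components in newtons."""
    horiz = abs(f_left[0]) + abs(f_right[0])
    vert = abs(f_left[1]) + abs(f_right[1])
    i = 0 if horiz > vert else 1        # direction with the larger summed magnitude
    if abs(f_left[i]) <= threshold or abs(f_right[i]) <= threshold:
        return None                     # forces too weak on at least one gripper
    s = f_left[i] + f_right[i]
    if i == 0:
        return "pull" if s > 0 else "push"
    return "up" if s > 0 else "down"
```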
The velocities of the transitions between different PSS states and, therefore, between different configurations of the humanoid robot were modulated according to Equation (9) by the parameter $A = -1.5 + 10^{-2}\,F_{\mathrm{grip}}^{1.4}$, where $F_{\mathrm{grip}} = 0.5\,(F_L + F_R)$, with $F_L$ and $F_R$ being the forces applied to the left and the right gripper of the humanoid robot, respectively, in the direction that triggered the current movement. A demonstration of how forces influence the velocities of PSS transitions is shown in Figure 8, where, in the beginning, the PSS transits from state 1, in which the humanoid robot is standing with its forearms oriented downwards, to state 2, in which the humanoid robot lifts its forearms into the horizontal position. Further on, the PSS transits between states 2 and 3, while the humanoid robot transits between the squatting and the standing-up position. In the end, the PSS returns to state 1, and the humanoid robot puts its forearms back down. The transitions between states 2 and 3 were triggered with different forces and, therefore, occurred at different velocities. The first two transitions, from state 2 to state 3 and back to state 2, were triggered at times 1.8 s and 3.8 s, respectively, by vertical forces with absolute values of approx. 20 N, and, therefore, both transitions lasted approximately 1 s. The third transition, from state 2 to state 3, was triggered at 5.1 s by vertical forces of approx. −40 N, and the transition lasted approx. 0.5 s. The vertical force that triggered the fourth transition, from state 3 to state 2, starting at 6.3 s, was approx. 60 N, and the transition lasted approx. 0.2 s. The last two transitions between states 2 and 3 had varying velocities, as the forces acting on the grippers of the humanoid robot had varying magnitudes throughout the transitions. The vertical force triggering the first of the last two transitions at 7.2 s was approx. −25 N and increased in magnitude to −65 N, which accelerated the transition. On the other hand, the force triggering the last transition, from state 3 to state 2, at 8.4 s was at first approximately 55 N but later ceased; the transition of the PSS, therefore, started with an elevated velocity that later decreased to its default value.
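Reading the speed parameter from the experiment description as $A = -1.5 + 10^{-2}\,F_{\mathrm{grip}}^{1.4}$, the force-dependent speed-up can be sketched as follows (a small illustration of how A feeds Equation (9); the names are ours, and a single fully active transition, $\Lambda_{ji} = 1$, is assumed):

```python
def speed_parameter(f_left, f_right):
    """A = -1.5 + 1e-2 * F_grip**1.4, with F_grip the mean gripper-force
    magnitude (N) in the direction that triggered the movement (our reading)."""
    f_grip = 0.5 * (abs(f_left) + abs(f_right))
    return -1.5 + 1e-2 * f_grip**1.4

def rate_scaling(A, activation=1.0):
    """eta(t) = 2**(A * Lambda_ji) for one fully active transition."""
    return 2.0 ** (A * activation)
```

With these constants, 20 N forces give a scaling of roughly 0.6 and 60 N forces roughly 3, a ratio consistent with the approximately 1 s versus 0.2 s transition durations reported for Figure 8.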
A demonstration of all possible PSS transitions, along with the horizontal and vertical forces exerted on the grippers of the humanoid robot, is shown in Figure 9.
In the beginning, the 1st task is demonstrated as initially, the PSS was in state 1, and the humanoid robot was standing with its forearms oriented downwards. After applying a pulling vertical force oriented upwards to the humanoid robot, the system started at 0.3 s to transit to state 2, while the humanoid robot started lifting its forearms. However, during the transition, a pushing vertical force was applied to the grippers of the humanoid robot, and, therefore, the PSS returned to state 1, and the humanoid robot put its forearms back down. Later on, the system was pushed to state 2 by reapplying a pulling vertical force oriented upwards, and at time 1.5 s, the humanoid robot started to lift its forearms in a horizontal position.
Next, the 2nd task is presented. Pushing vertical forces oriented downwards were applied to the humanoid robot standing with its forearms oriented horizontally, and at 2.3 s, the PSS started to transit to state 3 while the humanoid robot squatted. When the humanoid robot started to stand up at 3.2 s, as pulling forces were applied, an opposite force was exerted on its grippers, so the PSS transited back to state 3, and the humanoid robot returned to the squatting position. At 4.5 s, pulling forces were applied again, so the PSS transited to state 2, and the humanoid robot stood up with its forearms oriented horizontally.
Finally, from 5.5 s on, the transitions of the 3rd task occurred, and the humanoid robot performed walking movements. As can be seen in Figure 9, pulling forces exerted in the horizontal direction caused PSS transitions that made the humanoid robot step forwards, while PSS transitions caused by pushing forces in the horizontal direction made the humanoid robot step backwards. A transition worth emphasizing is the one at 9.8 s, when the PSS was in state 5 and a pushing horizontal force started its transition to state 6, while the humanoid robot, with its right leg positioned forwards, started moving it back. During this transition, a pulling horizontal force was exerted on the humanoid robot, so the PSS reversed its transition back to state 5, and the humanoid robot moved its right leg forwards again. When no forces were exerted on the humanoid robot and the PSS was in states 4–8, a spontaneous transition to state 2 occurred. This happened at 7.5 s, 12.4 s, 16.3 s, 19.9 s, and 28.2 s and made the humanoid robot put its legs one next to the other while keeping its forearms in the horizontal position.
Each time, a different force was applied to the grippers of the humanoid robot, so the velocities of the transitions triggered by these forces differed from transition to transition. Moreover, as the applied forces were modified or ceased during various transitions, the transition velocities also changed during the course of the transitions. See the Supplementary Materials for a sample video of the humanoid robot being guided.

4. Discussion

Robots are being employed in many different areas to assist humans with various tasks. Humanoid robots, which can mimic human behavior and perform human-like motions, are of special interest due to human–robot skill transfer methods and techniques. However, understanding human intentions and achieving cooperation that is both safe and efficient is very challenging. For this reason, the area of human–robot collaboration is still in its early days, and cooperation between humans and robots remains very limited. Nevertheless, this area of robotics has undergone tremendous advances in the last decades and, by engaging experts from various fields of science, is evolving at an incredible speed.
In this paper, human–robot collaboration is addressed by applying a dynamical PSS guided by SHC networks to a humanoid robot. Such a dynamical model can smoothly transit between its states with varying velocity, while each transition can be reversed so that the system returns to its initial state. As every PSS transition has a phase that increases from 0, when the PSS is in its initial state, to 1, when the PSS reaches its final state, it was coupled to the robot, which moves from its initial to its final configuration. This way, the PSS controls the robot's gait throughout the movement. The transitions between PSS states can occur automatically or can be triggered by certain events or conditions that must be fulfilled, depending on the executed task.
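The class of dynamics underlying such a PSS can be illustrated with a generalized Lotka–Volterra system whose saddle equilibria are linked by stable heteroclinic channels. The sketch below is only illustrative: the parameter values, the Euler integration, and all names are our assumptions and do not reproduce the paper's model:

```python
import numpy as np

def shc_step(a, alpha, rho, rng, dt=1e-3, noise=1e-6):
    """One Euler step of a generalized Lotka-Volterra network,
    da_i/dt = a_i * (alpha_i - (rho @ a)_i) + small noise,
    whose saddle points are linked by stable heteroclinic channels."""
    da = a * (alpha - rho @ a)
    a_next = a + dt * da + noise * rng.standard_normal(a.size)
    return np.clip(a_next, 0.0, None)  # activities stay nonnegative

# Three states on a cycle 1 -> 2 -> 3 -> 1: each state weakly inhibits
# its successor (entry 0.5 < alpha = 1) and strongly inhibits the other
# state (entry 2.0), so activity drifts from one saddle to the next.
alpha = np.ones(3)
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

rng = np.random.default_rng(0)
a = np.array([0.99, 0.01, 0.01])
dominant = [0]
for _ in range(100_000):                 # ~100 units of simulated time
    a = shc_step(a, alpha, rho, rng)
    if int(np.argmax(a)) != dominant[-1]:
        dominant.append(int(np.argmax(a)))
print(dominant)  # the dominant state advances cyclically 0 -> 1 -> 2 -> 0 ...
```

During a transition from state i to state j, a normalized quantity such as a_j / (a_i + a_j) rises from 0 to 1 and could serve as the transition phase coupled to the robot motion (our assumption of one possible phase definition).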
The dynamical PSS presented in this paper consists of eight states linked to eight different configurations of the humanoid robot. Some of the transitions between the discussed PSS states are triggered by the forces applied to the grippers of the humanoid robot, while some of them occur automatically if no forces are applied. Such an approach allows humans to guide a humanoid robot in the same way as they would guide another person when assisting him/her in moving. Furthermore, the larger the applied forces, the faster the movement of the robot, and vice versa. This allows humans to dynamically control the motion of the humanoid robot and even reverse its motion toward its initial state if forces in the opposite direction are applied.
The largest advantage of using a PSS controlled by SHC networks for guiding a robot is its simplicity of application. All the possible transitions are determined by a single matrix T, while a specific transition is triggered by the forces applied in a certain direction, which determine the matrix B. In the model presented in this paper, vertical forces make the humanoid robot squat and stand up, while horizontal forces make it walk forwards and backwards. However, the PSS used to control the motion presented in this paper could be extended to many other states and movements of the robot.
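As an illustration of this trigger logic, the squat-task transitions could be encoded as a small lookup keyed by the sign of the vertical gripper force. The encoding below (names, threshold, discrete states) is a hypothetical sketch of the behavior the matrices T and B produce, not the paper's implementation:

```python
# Allowed transitions of the squat task (states 1-3 of Figure 4),
# keyed by (current state, sign of the vertical gripper force):
# positive = pulling upwards, negative = pushing downwards.
TRIGGERS = {
    (1, +1): 2,  # pull up: lift forearms to horizontal
    (2, -1): 3,  # push down: squat
    (3, +1): 2,  # pull up: stand back up
}

def next_state(state, f_vertical, threshold=5.0):
    """Target PSS state for the measured vertical force; below the
    (assumed) threshold, or with no matching trigger, the state holds."""
    if abs(f_vertical) < threshold:
        return state
    sign = 1 if f_vertical > 0 else -1
    return TRIGGERS.get((state, sign), state)

print(next_state(2, -20.0))  # 3: pushing down makes the robot squat
print(next_state(2, +20.0))  # 2: no upward trigger defined from state 2
```

In the continuous PSS, the same role is played by the matrices T and B, which additionally allow mid-transition reversals and the spontaneous return to state 2 described for the walking task.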
The movements of the humanoid robot presented in this paper were pre-determined and parameterized using the GRBF method. The squatting and standing-up movements were obtained by recording a person and applying the reconstructed joint angles to the humanoid robot, while the walking movements were obtained with a computer simulation in the Gazebo simulator. The use of pre-defined motion is also the largest limitation of the presented implementation of the discussed dynamical PSS, as the robot is limited to the pre-recorded movements. Nevertheless, the presented approach provides an important step towards human–robot collaboration and interaction, as the robot must understand human intentions, expressed through the forces applied on its grippers, to perform the desired movement. Moreover, for the modulation of the motion velocity, the human's intent is further interpreted by considering the magnitude of the applied forces. With such an understanding of human intentions, the next step could be to adapt and improve the robot motion with the application of learning algorithms and to synthesize new robot movements according to the needs of the human collaborator.
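A minimal sketch of such a GRBF parameterization is given below: normalized Gaussian basis functions are fitted to a joint trajectory sampled over the phase τ ∈ [0, 1], in the spirit of Figure 7. The basis count, the width heuristic, the least-squares fit, and the synthetic trajectory are our assumptions:

```python
import numpy as np

def grbf_fit(tau, theta, n_basis=10, h=None):
    """Fit a normalized Gaussian radial basis function model,
    theta(tau) ~= sum_i w_i psi_i(tau) / sum_i psi_i(tau), with
    psi_i(tau) = exp(-h * (tau - c_i)**2), to a joint trajectory
    sampled at phases tau in [0, 1] (weights via least squares)."""
    c = np.linspace(0.0, 1.0, n_basis)           # basis centres
    h = h if h is not None else 0.5 * n_basis**2  # assumed width
    psi = np.exp(-h * (tau[:, None] - c[None, :]) ** 2)
    phi = psi / psi.sum(axis=1, keepdims=True)    # normalized basis
    w, *_ = np.linalg.lstsq(phi, theta, rcond=None)
    return c, h, w

def grbf_eval(tau, c, h, w):
    """Evaluate the fitted model at phases tau."""
    psi = np.exp(-h * (np.atleast_1d(tau)[:, None] - c[None, :]) ** 2)
    return (psi / psi.sum(axis=1, keepdims=True)) @ w

# Example: parameterize a smooth knee-like angle profile over the phase.
tau = np.linspace(0.0, 1.0, 100)
theta = 45.0 * (1.0 - np.cos(np.pi * tau))        # synthetic 0 -> 90 deg
c, h, w = grbf_fit(tau, theta)
err = float(np.abs(grbf_eval(tau, c, h, w) - theta).max())
print("max fit error (deg):", err)
```

Once fitted, the trajectory can be replayed at any phase value, so driving τ with the PSS transition phase directly modulates how fast the joint moves through the pre-recorded motion.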
In this paper, two sample tasks, squatting and walking, were performed by the humanoid robot in collaboration with a human, who led the robot by pulling and pushing its grippers in the horizontal and vertical directions. However, the range of robot movements, along with the guiding mechanisms and signals that the robot can understand from the human leader, can be extended almost without limit. In this way, human–robot collaboration can evolve and expand to many different areas, while robots can learn to cooperate in different ways and help humans perform countless tasks. Furthermore, with the inclusion of learning algorithms for the improvement of the robot motion and the execution of different tasks, human–robot collaboration could become even smoother and faster, and robots could take a major role in collaborative tasks.

5. Conclusions

With rising costs and decreasing availability of human labor, the demand for robots and robotic assistants increases on a daily basis in many different areas. However, despite the enormous advances the field of assistive robotics has undergone in the last decades, cooperation between humans and robots is still very limited. For this reason, new methods for understanding human intentions and controlling assistive robots, while providing a safe environment, need to be explored and developed.
In this paper, a promising approach for realizing human–robot collaboration is presented. The motion of the humanoid robot is controlled by a dynamical PSS consisting of 8 states, guided by SHC networks. Such a mathematical model is easy to implement and simple to control, as only a handful of matrices and parameters need to be determined and modified throughout the evolution of the model. As the transitions between the PSS states are triggered and modulated by the directions and amplitudes of the forces exerted on the grippers of the humanoid robot by a human leader, the robot must understand human intentions to execute the desired task. For this reason, the presented strategy represents a significant step towards human–robot collaboration, as it combines the understanding of human intentions with the execution of the desired tasks in the desired way. Moreover, as there is a virtually unlimited number of tasks that a robot can perform, such a dynamical PSS can be expanded and upgraded for collaborative robots to assist humans in many different areas, such as industry, healthcare, medicine, hospitality and tourism, home assistance, and many more.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s23031396/s1, A video of the humanoid robot Talos guided by a person and performing the motions presented in this paper.

Author Contributions

Conceptualization, T.P.; methodology, T.P.; software, T.P. and T.B.; validation, T.B.; formal analysis, T.B.; investigation, T.B.; resources, T.P.; data curation, T.B.; writing—original draft preparation, T.B.; writing—review and editing, T.P.; visualization, T.B.; supervision, T.P.; project administration, T.P.; funding acquisition, T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Slovenian Research Agency grant number N2-0153.

Institutional Review Board Statement

Ethical review and approval were waived for this study because no medical or psychological measurements were performed.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

All the data presented in this paper can be provided by the authors upon request.

Acknowledgments

The authors would like to thank Luka Mišković and Ana Gabriela Kostanjevec, who provided technical support for the measurements.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
PSS — Phase State System
SHC — Stable Heteroclinic Channel
GRBF — Gaussian Radial Basis Functions

Figure 1. A representation of a PSS built on the SHC networks consisting of three states, each lying on its coordinate axis and connected with multiple SHC.
Figure 2. A schematic representation of a PSS guided by the SHC networks consisting of three states and the possible transitions between them.
Figure 3. The time evolution of the three-state SHC system. The blue lines represent the system states, while the red lines represent the transitions from the initial to the final states and were obtained from the evolution phases from Equation (7).
Figure 4. A schematic representation of the dynamical PSS consisting of 8 states, guided by the SHC networks, with the corresponding positions of the humanoid robot and the conditions for the transitions between the PSS states to occur. The numbered circles represent the PSS states with the corresponding humanoid robot positions. The arrows between the circles represent the possible PSS transitions, while the forces next to the arrows are the conditions that trigger the corresponding transition. The blue (dashed), red (finely dashed), and green (dotted) rectangles encompass the PSS states, the transitions between them, and the configurations of the humanoid robot for the first, second, and third motion task, respectively.
Figure 5. (af) Photos of the humanoid robot Talos in gaits that correspond to different states of the PSS guided by the SHC networks, as described in the text.
Figure 6. Photo sequence of a person sitting down on a bench while being recorded by the motion capture system.
Figure 7. The knee joint angle of the humanoid robot during the standing-up motion as a function of phase τ . The blue empty circles are the recorded knee joint angles, the black curves are the Gaussian functions, and the red curve is the GRBF parametrization from Equation (12) of the knee joint angle, composed from the Gaussian curves from Equation (11).
Figure 8. Transitions of the PSS between states 1–3 as a function of time with the forces exerted on the grippers of the humanoid robot.
Figure 9. Transitions of the PSS between states 1–8 as a function of time with the forces exerted on the grippers of the humanoid robot.
