**8. Empirical Results and Discussion**

This section describes several manipulation problems solved using the proposed framework, as well as implementation issues. The mobile robot considered is *TIAGo*. It has a 7-degree-of-freedom arm, equipped with a gripper, mounted on a mobile platform through a lifting torso.

The simulated execution of the manipulation problem represented in Figure 4, called *Problem-1*, is shown in Figure 6. The problem domain considers the robot actions *transit, transfer, open,* and *push*, along with the operator actions *humanTransfer* and *humanOpen*. Actions are selected according to the planning mechanism in terms of symbolic and geometric reasoning. We assume that the values of the uncertain information are provided at run-time in simulation. The resulting executive plan is the following:

*Executive Plan*: { *Transit-B, CheckContainer-Box1-Open (False), HumanOpen-Box1, Transfer-B-Box1, Transit-A, SenseColor-A-Red (True), Transfer-A-RedTray* }
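Sensing actions such as *CheckContainer-Box1-Open* are the points where the conditional plan branches on a binary outcome. As an illustration of how such an action can be modeled in the *Contingent-FF* PDDL dialect with an `:observe` clause, consider the following sketch (the predicate and parameter names are hypothetical and not taken from the actual domain file):

```pddl
;; Illustrative sketch of a sensing action for Contingent-FF;
;; predicate and parameter names are invented for illustration.
(:action CheckContainer
  :parameters (?c - container)
  :precondition (robot-near ?c)
  ;; The planner branches on the truth value of this literal:
  ;; one branch assumes (open ?c), the other (not (open ?c)).
  :observe (open ?c))
```

In the branch where the container is observed closed, the planner can insert the collaborative *humanOpen* action before the *Transfer* action, which is exactly the structure of the executive plan above.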

The states represented in the figure are detailed in the caption of Figure 6.


**Figure 6.** The simulation results of the executive plan performed by the *TIAGo* robot: (**a**) is the initial robot and environment state, (**b**) is the state after the *Transit* action towards cylinder B, (**c**) is the result of the sensing action *CheckContainer-Box1-Open*, (**d**) is the state after applying the *HumanOpen* action, (**e**) is the state after the robot executes the *Transfer* action for cylinder B, (**f**) is the state when the robot transits to cylinder A, (**g**) is the state resulting from the sensing action *SenseColor-A-Red*, and (**h**) is the state when the robot places cylinder A on the associated tray.

In addition, the proposal has been evaluated for other cluttered problems where the robot needs to sort objects according to their colors. Regarding the action domain, the robot actions are *transit* and *transfer*, and the action template *humanTransfer* is considered for an operator. The problem represented in Figure 7, called *Problem-2*, shows the initial and goal states of the manipulation task, where the green and red objects must be located on the green and red regions, respectively. The red object is not initially located on the table. The pink region is a part of the robot workspace where an operator can transfer objects. The planning uncertainties are the color of the green object, which could actually be green or red, and the location of the red object, which could be on the robot table or in the human workspace. The *humanTransfer* action is therefore applied to bring the object into the robot workspace, as the robot is not allowed to move into the human workspace. In the final executable plan, the robot first transfers the green object to its target placement region. It then looks for the red object, figures out that the object is not located on the table, and asks an operator to transfer it. The *humanTransfer* action is selected in the conditional plan, so the requested object is transferred to the robot workspace. The operator updates the robot's knowledge through the robot system terminal. Once the robot is aware that the human action has been successfully performed, it travels to grasp the object. Eventually, the robot transfers the object to the target region.
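A minimal sketch of how the *humanTransfer* operator could be modeled is given below. The predicate names (`in-human-workspace`, `in-robot-workspace`, `pink-region`) are invented for illustration and may differ from the actual domain file; the idea is that the precondition restricts the action to objects out of the robot's reach, and the effect places the object in the exchange region of the robot workspace:

```pddl
;; Hypothetical sketch of the operator's collaborative action;
;; predicate and constant names are illustrative only.
(:action humanTransfer
  :parameters (?o - object)
  :precondition (and (in-human-workspace ?o)
                     (not (in-robot-workspace ?o)))
  :effect (and (not (in-human-workspace ?o))
               ;; the object is placed in the pink exchange region,
               ;; from where the robot can grasp it
               (at ?o pink-region)
               (in-robot-workspace ?o)))
```

Since the operator reports completion through the robot system terminal, the effect of this action is asserted in the robot's belief state only after the human confirms it.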

The proposed approach has been tested on similar problems by increasing the number of objects and varying the color and/or location uncertainties. The performance of these problems is reported in Table 1 in terms of conditional and executive plan lengths, as well as planning time. *Problem-3* is a cluttered problem where there are nine objects and three of them need to be sorted. Uncertainties similar to those of *Problem-2* are considered regarding the colors and locations of the objects. *Problem-4* is the one where 12 objects exist and four of them must be sorted. In this case, uncertainty information such as the objects' colors and locations is considered for more objects.

**Figure 7.** The manipulation example where the green and red objects must be placed in the green and red regions. (**a**) shows the initial state of the problem. (**b**) shows the final state of the problem. The red object is not initially located in the robot workspace. The pink region is the place where the human can transfer objects to the robot workspace. The solution can be visualized here: https://sir.upc.es/projects/ontologies/GreenRedHuman.mp4. The solution for the case in which the red object is initially located on the table is visualized here: https://sir.upc.es/projects/ontologies/TiagoRedGreenRob.mp4.


**Table 1.** The conditional and executive plan lengths, in terms of the number of sensing and executive actions, and the planning time in seconds for the evaluated problems.

Concerning the implementation framework, four components are considered: task planning, relaxed geometric reasoning, motion planning, and the executive module. Task planning is developed using a modified version of the *Contingent-FF* planner coded in C++. All the action templates are described using *PDDL*, considering *ADL* (*Action Description Language*, ref. [28]), which enables us to define operators in a more compact way, using quantifiers and conditional effects. There is no pre-processing step to compute the geometric details of actions; they are computed and assigned during the manipulation planning process.
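To illustrate why ADL constructs make the operators more compact, the sketch below shows a *transfer*-style action using a conditional effect and a universally quantified effect; a pure STRIPS encoding would need a separate operator for each case. All predicate names here are invented for illustration and are not the paper's actual domain:

```pddl
;; Illustrative use of ADL quantifiers and conditional effects;
;; predicates are hypothetical, not the actual domain file.
(:action transfer
  :parameters (?o - object ?r - region)
  :precondition (and (holding ?o) (robot-at ?r))
  :effect (and (not (holding ?o))
               (handempty)
               (at ?o ?r)
               ;; conditional effect: the object counts as sorted
               ;; only if its color matches the target region
               (when (color-match ?o ?r) (sorted ?o))
               ;; quantified effect: the placed object no longer
               ;; occludes any other object
               (forall (?b - object)
                 (when (occludes ?o ?b) (not (occludes ?o ?b))))))
```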

We use *The Kautham Project* [29], a C++-based open-source tool for motion planning that enables planning under geometric and kinodynamic constraints, for relaxed geometric reasoning and motion planning. It uses the *Open Motion Planning Library (OMPL)* [30] as its core set of sampling-based planning algorithms. In this work, the *RRT-Connect* [31] planner is used for motion planning. This planner is one of the most efficient motion planners, but it does not guarantee optimal motions. *The Kautham Project* includes different collision-checking modules to detect robot–object and object–object collisions, and features a placement sampling mechanism to find feasible object poses in the workspace. Relaxed geometric reasoning uses these modules to find feasible sample geometric instances for the symbolic actions. The executive module uses a sensing module based on the 3D camera mounted inside the *TIAGo* robot, and also some components provided by *PAL Robotics* to send a motion path to the robot. The communication between the task planning, relaxed geometric reasoning, motion planning, and executive modules is done via the *Robot Operating System (ROS)* [32].

### **9. Conclusions**

This paper has proposed a contingent-based task and motion planning approach able to cope with high-dimensional mobile manipulation problems in the presence of high-level uncertainty and human interactions (referring to the sharing of knowledge and to collaborative actions that are beyond the robot's capabilities). For this purpose, the basic *Contingent-FF* planner has been modified to include robot action reasoning, human–robot collaboration, and state observation. A set of geometric reasoning processes has been integrated into the planning process to capture the task constraints imposed by the robot environment and to update the belief state while task planning is performed. Moreover, some modules linked with human knowledge, along with the perception system, have also been designed to observe the binary outcomes of actions. It is worth noting that the proposed approach results in a tree-shaped conditional plan which is geometrically feasible regardless of the outcomes of the sensing actions.

To evaluate the proposed approach, several manipulation tasks have been executed in simulation and in real environments to show how human–robot interactions are tackled, and how both geometric constraints and high-level uncertainty are identified and handled. Problem performance has been reported in terms of the length of the manipulation plan and the planning time, considering an increasing number of objects. In all cases, the robot, in collaboration with the human operator, has been able to solve the tasks despite the uncertainty and the constraints.

Future work will concentrate on manipulation tasks also subject to low-level geometric uncertainty, its effects on sensing, and how it is propagated to task planning.

**Author Contributions:** A.A. and J.R. conceived the theoretical contributions, A.A. wrote the paper and implemented the whole framework, being assisted by M.D. for the perception part. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was partially supported by the Spanish Government through the project DPI2016-80077-R. Mohammed Diab is supported by the Spanish Government through the grant 2017.

**Conflicts of Interest:** The authors declare no conflict of interest.
