6.1.2. Robotic Agent Architecture

From a software architecture point of view, each robot is organised in three layers; the main components are sketched in Figure 11. Each layer groups responsibilities by abstraction level, so that the higher the layer, the more abstract the issues its software components address.

**Figure 11.** Robotic Agent Architecture. The first layer includes the software components that represent systems or devices through which the agent can interact with the environment. The second layer includes models and algorithms to keep the models up to date. The third layer includes the task identification and selection algorithms. Components on the shadowed zone were developed during this work.

Going bottom-up through the layer stack, the components of the first layer are in charge of the interaction between the robotic agent and the environment. The *Motion Control* component is taken from the *MORSE* repository (*MORSE* physics simulator, www.openrobots.org/morse/doc/stable/morse.html) and is responsible for controlling the motors; in this work it follows a waypoint-based motion strategy. The *Sensory Capabilities* component groups all sensory systems in charge of gathering environmental information. The most relevant information comes from the *Pose* and *Laser scanner* sensors, also taken from the *MORSE* repository. The *Pose* sensor provides the robot configuration *Xi*(*t*) = {*xi*(*t*), *yi*(*t*), *θi*(*t*)} at any time, implementing the localisation capability, while the laser gives an array of distance measurements *z*(*t*) from which the map of the close surroundings can be built. Finally, the *Communication Capabilities* component manages every aspect of communications, receiving/sending information from/to other team members (see the incoming/outgoing arrows). Since only distance and wall-attenuation effects are considered in this work (other sources of perturbation are discarded), communication is simulated in a very simple manner by directly applying the communication model introduced in Section 2.3.
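The communication model of Section 2.3 is not reproduced here, but a distance-plus-wall-attenuation link check can be sketched with a generic log-distance path-loss model; all parameter names and default values below (`path_loss_exp`, `ref_loss_db`, `wall_loss_db`, `sensitivity_dbm`) are illustrative assumptions, not the paper's actual model.

```python
import math

def link_quality(dist_m, n_walls,
                 p_tx_dbm=0.0, path_loss_exp=2.0,
                 ref_loss_db=40.0, wall_loss_db=6.0,
                 sensitivity_dbm=-90.0):
    """Received power under a log-distance path-loss model with a
    fixed attenuation per intervening wall (illustrative values).
    Returns (rssi_dbm, reachable): the link is usable when the
    received power stays above the receiver sensitivity."""
    if dist_m <= 0:
        return p_tx_dbm, True
    loss = ref_loss_db + 10.0 * path_loss_exp * math.log10(dist_m)
    loss += wall_loss_db * n_walls   # each wall adds a fixed penalty
    rssi = p_tx_dbm - loss
    return rssi, rssi >= sensitivity_dbm
```

Two agents in line of sight a metre apart would be connected, while a distant pair separated by several walls would not; the simulator only needs this boolean outcome to decide whether a message is delivered.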

The second layer represents the core of the system, where the models and algorithms that support the highest-level functionalities—namely those related to the exploration purpose of the system—are allocated. On the one hand, the *World Model* component is in charge of modelling all physical interactions between the robotic agent and its surroundings. By keeping several structures up to date (e.g., the occupancy grid map, the positions of the fleet members, the assignments of the fleet members), it is also able to support the predictive services required by the highest-level algorithms. On the other hand, the *Mapping* and *Path Planning* components are also supported by the *World Model* component, since it gives an access point to the mapping structures as well as the kinematic models. The *Mapping* component implements a standard occupancy grid approach [46], where the posterior of the map is calculated from a collection of separate problems of estimating *p*(*mi*|*z*(*t*), *Xi*(*t*)) for every grid cell *mi*, and where each *mi* has attached to it one of the occupancy values *S* = { *f* , *o*, *u*} (previously defined in Section 2.1). The *Path Planning* component implements the wave-front propagation approach introduced in [43].
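On a grid whose cells carry the states *S* = { *f* , *o*, *u*}, wave-front propagation can be sketched as a breadth-first expansion from the goal followed by a greedy descent of the resulting distance field; this is a minimal 4-connected sketch of the general technique, not the exact implementation of [43].

```python
from collections import deque

FREE, OCC, UNK = "f", "o", "u"  # occupancy states S = {f, o, u}

def wavefront(grid, goal):
    """Breadth-first wave-front from the goal: each free cell receives
    its hop distance to the goal; occupied and unknown cells stay
    unreachable (None)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == FREE and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                frontier.append((nr, nc))
    return dist

def descend(dist, start):
    """Follow strictly decreasing wave-front values from start to the
    goal (assumes start is reachable, i.e. dist[start] is not None)."""
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        r, c = min(((r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < len(dist)
                    and 0 <= c + dc < len(dist[0])
                    and dist[r + dr][c + dc] is not None),
                   key=lambda rc: dist[rc[0]][rc[1]])
        path.append((r, c))
    return path
```

Because the wave expands only through *f* cells, unknown regions act as obstacles, which is the conservative choice during exploration.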

Finally, high-level decisions such as coordination are taken in the third layer, where the task allocation scheme is executed by the *Task Assignment* component. In particular, the arrow between the *Task Assignment* and *Communication Capabilities* components represents the exchange of current positions and task assignments between the agent and the rest of the fleet.
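The exchanged positions and assignments feed the allocation step. The paper's actual scheme is not detailed here, so the sketch below uses a deliberately simple greedy rule, each robot in turn claims the nearest still-unassigned target, purely to illustrate the kind of decision the *Task Assignment* component takes; function names and the distance criterion are assumptions.

```python
import math

def greedy_assign(robot_poses, targets):
    """Illustrative greedy allocation: iterating over the fleet, each
    robot takes the closest unassigned target point (Euclidean
    distance on the (x, y) part of the pose).
    robot_poses: {robot_id: (x, y, theta)}; targets: [(x, y), ...].
    Returns {robot_id: (x, y)} for the robots that got a target."""
    unassigned = list(targets)
    assignment = {}
    for rid, (x, y, _theta) in robot_poses.items():
        if not unassigned:
            break  # more robots than targets: the rest stay idle
        best = min(unassigned,
                   key=lambda p: math.hypot(p[0] - x, p[1] - y))
        assignment[rid] = best
        unassigned.remove(best)
    return assignment
```

A market-based or optimisation-based allocator could replace this rule without touching the rest of the architecture, since the component only consumes the fleet state kept by the *World Model*.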
