3.2.1. Perception

The lowest level we define in perception is the sensor layer. Software modules in this abstraction layer are drivers that decode the information captured by the sensors. We divide the sensors into two types: exteroceptive and proprioceptive. Exteroceptive sensors perceive the environment, while proprioceptive sensors provide the information needed to generate odometry. In the next perception level, we define the preprocessing layer, in which we place the modules needed to extract information from the different sensors, implementing data fusion algorithms if required. Then, we define the highest abstraction level in the perception area, which contains the localization or SLAM module. Depending on the localization system in use, the output pose of this module can be expressed with respect to different frames (local, tracked object, map, or global coordinates). The localization module is designed to receive its inputs in a ROS standard format, which makes the lower layers transparent: there is no difference whether the data comes from the sensor layer or from the preprocessing layer. This level may also contain modules for scene understanding or for detecting obstacles and moving objects.
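The layered data flow described above can be sketched as follows. This is a minimal illustration, not part of the architecture itself: the class and message names are hypothetical stand-ins for ROS standard message types (e.g. sensor_msgs/LaserScan, nav_msgs/Odometry), and the localization step is a trivial placeholder for a real SLAM or scan-matching algorithm.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for ROS standard messages; names and
# fields are illustrative only.
@dataclass
class LaserScan:   # exteroceptive sensor output: environment perception
    ranges: list

@dataclass
class Odometry:    # proprioceptive sensor output: ego-motion estimate
    x: float
    y: float
    theta: float

class PreprocessingLayer:
    """Extracts information from raw sensor data; data fusion
    algorithms would also live at this level."""
    def extract(self, scan: LaserScan) -> LaserScan:
        # e.g. discard out-of-range readings before localization
        return LaserScan([r for r in scan.ranges if 0.1 < r < 30.0])

class LocalizationModule:
    """Consumes inputs in a standard format; it cannot tell whether a
    scan came straight from the sensor layer or from preprocessing."""
    def update(self, scan: LaserScan, odom: Odometry) -> tuple:
        # Placeholder: a real module would run localization or SLAM here
        # and express the pose in the appropriate frame.
        return (odom.x, odom.y, odom.theta)

raw = LaserScan([0.05, 1.2, 35.0, 4.7])
filtered = PreprocessingLayer().extract(raw)
pose = LocalizationModule().update(filtered, Odometry(1.0, 2.0, 0.0))
print(filtered.ranges)  # [1.2, 4.7]
print(pose)             # (1.0, 2.0, 0.0)
```

Because both layers emit the same message type, the localization module can subscribe to either source unchanged, which is the transparency property described above.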
