2.2. Features of RoNAP
RoNAP mainly employs the Pygame package to create its various visual interfaces. In this section, its abundant functions and prominent characteristics are introduced in detail.
In order to clearly observe the movement pose of the robot, RoNAP provides a first-person perspective interface, as shown in Figure 2. The field of view is focused on the circular area around the robot, whose radius is the maximum detection distance of the laser. In this interface, the mobile robot remains stationary, meaning that the emission angle of each laser does not change, which is convenient for calculating the distance between obstacles and the robot. Meanwhile, a rotating scene is provided by changing the robot’s forward angle $z$, ensuring that the orientation of the robot relative to the scene conforms to the actual change.
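Concretely, keeping the robot fixed on screen amounts to rotating every scene offset by the negative of the forward angle $z$. A minimal sketch of this transform (function name and angle conventions are illustrative assumptions, not RoNAP's actual code):

```python
import math

def to_view_frame(dx, dy, z_deg):
    """Rotate a world-frame offset (dx, dy), measured from the robot,
    into the first-person frame whose forward axis stays fixed on screen.
    Assumed convention: angles in degrees, 0 along +x, counter-clockwise."""
    z = math.radians(z_deg)
    # Rotating the scene by -z keeps the robot's heading fixed on screen.
    vx = dx * math.cos(-z) - dy * math.sin(-z)
    vy = dx * math.sin(-z) + dy * math.cos(-z)
    return vx, vy
```

With this convention, a point directly ahead of a robot heading 90° appears on the view's lateral axis, which is exactly the rotating-scene effect described above.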
This platform offers several sensors to obtain scene information. The laser is set to utilize $m$ beams to detect obstacles in the direction of movement. Whether a laser beam reaches an obstacle is determined according to the color of the coordinate position on the picture, as described in Figure 3. Each distance is denoted as $d_i$, $i = 1, \dots, m$, and the distances detected by the lasers are indicated by rays with different colors. The detection coverage is wide enough to fully perceive all obstacles ahead of the robot.
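The color-based detection described above can be sketched as a per-pixel ray march: step along each beam until a non-background pixel is sampled. Here a plain grid of RGB tuples stands in for the Pygame surface (with Pygame itself, `surface.get_at((x, y))` plays this role), and all names are hypothetical:

```python
import math

def laser_distance(image, x0, y0, angle_deg, max_range):
    """March along one beam, one pixel per step, until the sampled pixel
    is not background-black (an obstacle) or max_range is reached."""
    a = math.radians(angle_deg)
    for r in range(1, max_range + 1):
        x = int(round(x0 + r * math.cos(a)))
        y = int(round(y0 + r * math.sin(a)))
        if not (0 <= y < len(image) and 0 <= x < len(image[0])):
            return r  # treat the window border as an obstacle
        if image[y][x] != (0, 0, 0):  # non-black pixel: obstacle hit
            return r
    return max_range

def scan(image, x0, y0, m, fov_deg, max_range):
    """Cast m beams spread evenly across a field of view centred ahead."""
    start = -fov_deg / 2
    step = fov_deg / (m - 1) if m > 1 else 0
    return [laser_distance(image, x0, y0, start + i * step, max_range)
            for i in range(m)]
```

Stepping one pixel at a time is the simplest scheme consistent with per-pixel color tests; finer substeps would only matter for very thin obstacles.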
Furthermore, a location sensor is incorporated in the design to obtain the current position $(x, y)$ and the destination position $(x_d, y_d)$. The distance to the destination is computed by
$$d = \sqrt{(x_d - x)^2 + (y_d - y)^2}.$$
This distance is displayed on the Start interface to indicate whether the robot is moving towards the destination position. The final destination is regarded as reached when $d \le \varepsilon$, where $\varepsilon$ is the acceptable distance error; at this point, “TaskComplete” is printed to remind the user. After obtaining the coordinates of the start position and the destination position, the angle $\alpha$ between the line connecting the two points and the x-axis is computed by
$$\alpha = \arctan\frac{y_d - y}{x_d - x}.$$
The direction to the destination is shown by the red arrow in Figure 2, which rotates with the scene based on $(\alpha - z)\,\%\,360^{\circ}$, where $\alpha$ is the angle between the line to the destination and the x-axis, $z$ is the robot’s forward angle, and $\%$ denotes the remainder operation. Furthermore, a gyroscope is equipped to measure the direction of the robot’s movement $z$. The combination of all the above-mentioned information is enough to determine the pose of the robot.
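Putting the location sensor, completion check, and arrow direction together gives a short computation. A hedged sketch (names and the exact wrap convention are assumptions; `atan2` is used so the quadrant of the destination is handled correctly):

```python
import math

def destination_info(x, y, xd, yd, z_deg, eps):
    """Return the distance to the goal, a completion flag, and the red
    arrow's on-screen angle (goal bearing minus forward angle, wrapped)."""
    d = math.hypot(xd - x, yd - y)          # Euclidean distance to goal
    done = d <= eps                          # "TaskComplete" when this holds
    alpha = math.degrees(math.atan2(yd - y, xd - x))  # angle to the x-axis
    arrow = (alpha - z_deg) % 360            # arrow rotates with the scene
    return d, done, arrow
```

For example, a goal offset of (3, 4) gives the familiar 3-4-5 distance of 5 and a bearing of about 53.13°.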
The scene of this platform is similar to a maze built from L-shaped obstacles. In order to conveniently change the distribution of obstacles in the maze, a black scene is divided into several square grids. As described in Figure 4, the current position of a red agent is initially set on the first grid at the top. A black adjacent grid is randomly selected as the next position, which turns white after the agent passes by, as shown in Figure 5. Obstacles are placed on the four sides of each grid, and the obstacle on the side crossed by the red agent is removed, which ensures that there is a channel between any two positions in the scene. If there is no black grid around the agent, it returns to the previous grid and searches for other black grids. The red agent continues to move in the scene until no black grid remains. The grids selected by the red agent are random each time the scene is generated, resulting in a different placement of obstacles.
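The walk described above is essentially a randomized depth-first search that carves a spanning tree of passages, which is why any two positions remain connected. A sketch under that reading (identifiers are illustrative, not RoNAP's code):

```python
import random

def generate_maze(n, seed=None):
    """Randomized DFS over an n x n grid, mirroring the red agent's walk:
    start at the top, repeatedly step to a random unvisited ("black")
    neighbour and knock out the shared wall; backtrack when none remains.
    Returns, per cell, the set of sides opened into passages."""
    rng = random.Random(seed)
    open_sides = {(r, c): set() for r in range(n) for c in range(n)}
    visited = {(0, 0)}
    stack = [(0, 0)]
    moves = {"N": (-1, 0), "S": (1, 0), "W": (0, -1), "E": (0, 1)}
    opposite = {"N": "S", "S": "N", "W": "E", "E": "W"}
    while stack:
        r, c = stack[-1]
        nbrs = [(d, (r + dr, c + dc)) for d, (dr, dc) in moves.items()
                if (r + dr, c + dc) in open_sides
                and (r + dr, c + dc) not in visited]
        if not nbrs:
            stack.pop()   # no black grid around: return to the previous grid
            continue
        d, nxt = rng.choice(nbrs)
        open_sides[(r, c)].add(d)           # remove the wall the agent crosses
        open_sides[nxt].add(opposite[d])
        visited.add(nxt)
        stack.append(nxt)
    return open_sides
```

Because each newly visited cell opens exactly one wall, the result is a spanning tree: every cell is reachable and the layout differs with each random seed, matching the varied obstacle placement described above.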
Furthermore, the robot’s movement is divided into several discrete actions, i.e., turn left, go forward, and turn right. Compared with a continuous action space, discrete actions are simpler to operate, and their combination allows the robot to move to any possible position in the scene. The robot’s moving speed $v$ and rotating speed $\omega$ are set through the Configurations button, which allows the user to change the robot’s pose $(x, y, z)$ at each step by
$$x \leftarrow x + v\cos z,\qquad y \leftarrow y + v\sin z,\qquad z \leftarrow z \pm \omega.$$
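The discrete-action update can be sketched as follows; the coordinate convention (angles in degrees, mathematical y-axis) is an assumption, since Pygame's screen y-axis actually points down:

```python
import math

def step(pose, action, v, w):
    """Apply one discrete action to pose = (x, y, z). Turning changes only
    the forward angle z by the rotating speed w; going forward advances
    by the moving speed v along the heading z."""
    x, y, z = pose
    if action == "LEFT":
        z = (z + w) % 360
    elif action == "RIGHT":
        z = (z - w) % 360
    elif action == "AHEAD":
        x += v * math.cos(math.radians(z))
        y += v * math.sin(math.radians(z))
    return (x, y, z)
```

Any reachable position can then be approached by composing turns and forward steps, which is the point made above about the discrete action space.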
A user name is set to distinguish different users and facilitate subsequent data searches. When this information is set, the initial settings are replaced. A warning alarm, “CollisionWarning”, appears in the event of a collision with an obstacle, after which the robot returns to its previous position.
The Collect Data function is utilized to record human navigation strategies. The robot’s movement is controlled through the keyboard in the scene, with the arrow keys corresponding to the three discrete actions as follows.

| Keyboard | ← | ↑ | → |
|---|---|---|---|
| Action | turn left | go ahead | turn right |

To prevent the robot from continuing to move after a key is pressed, an anti-mispress function is set such that the other arrow keys work only after the down arrow key is pressed. Relevant information is saved in a file in the form shown below.
| | | | |
|---|---|---|---|
| 135.73 | 67.84 | 10.90 | AHEAD |
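The anti-mispress behaviour can be modelled as an arming flag toggled by the down arrow; in the real platform this logic would sit inside the Pygame event loop (`KEYDOWN` events), and the names below are placeholders:

```python
# Hypothetical stand-in for the Pygame event loop: arrow keys only register
# as actions once the down arrow has "armed" the controls.
KEY_TO_ACTION = {"LEFT": "turn left", "UP": "go ahead", "RIGHT": "turn right"}

def collect(events):
    """Replay a sequence of key presses and return the recorded actions."""
    armed = False
    actions = []
    for key in events:
        if key == "DOWN":
            armed = True          # pressing down enables the other arrows
        elif armed and key in KEY_TO_ACTION:
            actions.append(KEY_TO_ACTION[key])
    return actions
```

Key presses before the down arrow are simply ignored, so an accidental tap cannot move the robot.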
Here, $d$ is an $m$-dimensional vector representing the distances detected by the lasers. As input information, this is sufficient to distinguish the pose of the robot in different positions. Moreover, the position of the robot changes randomly after each action through
$$x \leftarrow r_1 S,\qquad y \leftarrow r_2 S,\qquad z \leftarrow g,$$
where $r_1$, $r_2$ are random numbers satisfying a uniform distribution between 0 and 1, $g$ is a random number satisfying a Gaussian distribution, and $S$ is the size of the window. This random change offers each state of the robot the same probability of being collected, ensuring the integrity and variance of the data. Meanwhile, the random position reduces the continuity of the data in both time and space.
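The random reset can be sketched as below; the Gaussian's mean and standard deviation are placeholders, since the distribution's parameters are not specified here:

```python
import random

def random_reset(S, rng=None):
    """Resample the robot's state after an action: x and y from uniform
    draws over the window of size S, the heading z from a Gaussian
    (assumed mean 180 and std 60, wrapped into [0, 360))."""
    rng = rng or random.Random()
    x = rng.random() * S              # r1 * S, r1 ~ U(0, 1)
    y = rng.random() * S              # r2 * S, r2 ~ U(0, 1)
    z = rng.gauss(180.0, 60.0) % 360  # assumed Gaussian heading
    return x, y, z
```

Drawing a fresh position per sample breaks the temporal and spatial correlation between consecutive records, which is the decorrelation effect noted above.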
RoNAP is user-friendly because it supports a simple form of algorithm access. It is suitable for accessing an input–output model which is established separately on the basis of an algorithm. The established model decides the robot’s action according to the observed scene information. This platform changes the position of the robot in the scene based on the executed action, and the new state information is saved and passed to the model as the next input. Looping continues until the end of the task. In addition, there is no need for complicated operations in the process of algorithm testing; the only step is to simply click the corresponding button according to the prompts.
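The access loop described above reduces to a few lines; `model` and `env` below are hypothetical stand-ins for the user's algorithm and the platform, not RoNAP's actual API:

```python
def run_episode(model, env, max_steps=200):
    """Feed observed state to the model, apply its action, loop until the
    task ends or a step budget is exhausted."""
    state = env.reset()
    for _ in range(max_steps):
        action = model(state)            # user's model decides the action
        state, done = env.step(action)   # platform returns the new state
        if done:
            return True                  # task completed
    return False

class ToyEnv:
    """Tiny demonstration environment: the task ends once a counter,
    advanced by the chosen action, reaches 3."""
    def reset(self):
        self.n = 0
        return self.n
    def step(self, action):
        self.n += action
        return self.n, self.n >= 3
```

Any input–output model with this signature can be plugged in unchanged, which is the "simple form of algorithm access" described above.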
The above functions are only the initial settings of RoNAP; the outstanding feature of the platform is that its parameters can be changed according to the user’s needs. The functions of the platform are written as separate modules in Python, meaning that users familiar with programming can easily find the code corresponding to each function. Taking the robot’s actions as an example, they can be set to a combination of angular velocity and linear velocity; the only change needed in the code is the way in which the robot’s coordinates are calculated after an action is performed. Other parameters, such as the number of lasers, the maximum detection distance, and the size of the scene, can likewise be modified by editing the code.