1. Introduction
The high operating voltage of a substation produces electromagnetic radiation and other hazardous conditions, so personnel safety cannot be guaranteed and accidents are not uncommon. Inspection robots allow staff to perform this work from a remote location, and they also eliminate many missed and erroneous inspections [1].
Substation robots belong to the class of modern intelligent mobile robots. Their main functions are to perceive their environment and find an appropriate path to the goal point in order to complete the assigned mission. Autonomous navigation is the basis of robot intelligence and the key to a robot autonomously travelling to a destination and completing specific tasks. During autonomous navigation, the robot must not only ensure its own safety but also avoid damaging the surrounding environment. A complete navigation system comprises three aspects: autonomous positioning, map construction, and planning control [2], corresponding to the three problems above. Since the robot must know its position while constructing an environment map, positioning and mapping are studied and analyzed together as one aspect; navigation path planning is analyzed as a second aspect; and, for the safety of the robot, stopping and obstacle avoidance during navigation are analyzed as a third aspect. Existing navigation methods include trajectory navigation, inertial navigation, Global Positioning System (GPS) navigation, visual navigation, and laser navigation.
Trajectory navigation lays a guideline on the ground along the robot's route; the robot body is equipped with a guideline recognition device, and a control algorithm keeps the robot walking along the guideline without deviation. Its advantage is that it is stable and reliable, but its disadvantages are also obvious: the environment must be modified, later maintenance costs are high, and the robot's movement is restricted to the guideline [3,4,5].
Inertial navigation is a typical relative positioning method. The robot's pose is obtained by integrating the sensor data of an Inertial Measurement Unit (IMU), comprising a magnetometer, accelerometer, and gyroscope. Dead reckoning in an odometry system also belongs to inertial navigation; it derives the robot's pose from the motor encoders and robot kinematics, and it is more accurate than the inertial measurement unit. Inertial navigation performs well for short-distance positioning and navigation, but over longer distances it accumulates serious errors, so inertial navigation alone cannot achieve accurate navigation and positioning [6].
GPS navigation is an absolute positioning method, so it has no cumulative error. Because pose is obtained by receiving the GPS signal, signal strength and interference strongly affect the method, and reception is especially difficult indoors; for outdoor environments, however, the method is relatively stable. It has been applied to mobile phones, cars, and map navigation, and in some outdoor robots the GPS positioning data is fused with other data to improve positioning and navigation accuracy [7].
Visual navigation [8] has been the hottest research direction in recent years; the main hardware includes monocular, binocular, and depth cameras. This technology is also called visual Simultaneous Localization and Mapping (SLAM) [9]. Because the environmental images obtained by the camera contain plentiful feature information, similar to human eyes, advanced algorithms can construct a three-dimensional map of the environment and determine the robot's pose within it. Compared with LIDAR, cameras are inexpensive, which reduces hardware costs. However, this method is still at an early research stage and requires high computing capability; most systems remain experimental and have not yet been truly commercialized. Vision depends on light, and feature information about the environment is lost at night, so all-weather autonomous navigation cannot be achieved by vision alone.
Laser navigation has been the most popular navigation method in recent years, because it can use SLAM technology to build maps without modifying the environment, which greatly improves the versatility of the robot, and it can be applied both indoors and outdoors. Nowadays, most robots construct navigation systems by combining multiple navigation methods; laser navigation is usually combined with inertial navigation to realize autonomous navigation. However, the price of LIDAR is high, which greatly increases the cost of the robot [10].
Predecessors have done much research on these methods. Guastella et al. concluded that perception plays a vital role in complex environments because it provides the information vehicles need to be aware of their states and surroundings, given the unstructured nature of the environments under consideration; however, these methods are still at an embryonic stage, and mature use in the structured environment of an indoor substation is still quite far away [11]. Seçkin et al. proposed a natural navigation method that tracks paths from 2D maps, memorizing a previous path and then following the same path; with this approach a mobile robot can reach a target at a low error rate without accumulating errors, even in a multi-bend system, but a sequence of symmetrical or repetitive objects leads the robot to detect similar patterns and make wrong decisions [12]. Yan et al. proposed a mapless navigation method based on two-dimensional LIDAR for complex and unknown environments, which does not depend on global planning or an environmental map; however, with moving obstacles, the optimal sub-target changes as the obstacles move, which can make targets unreachable [13].
Laser SLAM is currently the most stable and mainstream positioning and navigation method. The main laser SLAM mapping methods include Gmapping, HectorSLAM, and KartoSLAM. Gmapping is currently the most widely used 2D SLAM method; it can construct indoor maps in real time and performs well in long corridors and low-feature scenes. Its disadvantages are that it depends on odometry, is not applicable to unmanned aerial vehicles or uneven ground, and has no loop closure. HectorSLAM has relatively high sensor requirements; it does not use odometry and can adapt to uneven ground, but it requires a LIDAR with a high update frequency and low measurement noise. KartoSLAM is a graph-based SLAM approach in which the graph represents the map; each node corresponds to a sensor measurement dataset and a position on the robot trajectory. In [14], these algorithms were compared in simulation environments of different sizes, in a real environment, and by CPU consumption, and KartoSLAM and Gmapping were found to have more advantages. Owing to its small computational load and effective use of odometry information, this paper chooses the Gmapping mapping method.
This article focuses on the robot chassis, including the selection of the motion chassis model and the realization of autonomous navigation functions. Firstly, the EKF is used to fuse odometry and inertial navigation data, and the Gmapping method is used to construct the environment map. Secondly, the A* algorithm is used to plan the global path, and then the DWA is used to plan the local path to achieve dynamic obstacle avoidance; at the same time, a safety edge sensor realizes the robot's stop function. Finally, the developed chassis navigation system is applied to the indoor substation switch cabinet operation robot and verified in the process of detection and evaluation.
3. Robot Multi-Sensor Data Fusion SLAM
The autonomous navigation system uses multi-sensor fusion SLAM technology, robot path planning technology, and automatic stopping and obstacle avoidance technology. The chassis includes an odometry unit formed by the motor encoders and an inertial navigation unit formed by the heading sensor; the data obtained by these two navigation and positioning methods [16] are fused to obtain more accurate relative position coordinates. Gmapping is used to construct the environment map. With a known map, according to the laser point cloud data and the robot's relative positioning, Adaptive Monte Carlo Localization (AMCL) realizes the robot's relocalization, which reduces the positioning error accumulated during the robot's long-term movement.
3.1. Data Fusion Positioning of Odometry and Inertial Navigation Based on the EKF
Combining multiple navigation methods is the most useful way to increase the precision of the robot's navigation [17,18,19]. Fusing data from multiple sensors compensates for each sensor's shortcomings and enhances the steadiness and robustness of navigation and positioning.
The wheel odometer accumulates errors, and sudden acceleration or deceleration and wheel slippage during sharp turns also cause errors. To prevent these errors from interfering with mapping, an IMU (Inertial Measurement Unit) sensor is added; its data is stable, has high precision, and has a wide range of applications. In the ROS system, an extended Kalman filter can fuse the two data sources, reduce the cumulative error of the odometer, and provide more accurate pose information for the mobile robot.
IMU and odometer data can be fused with an EKF, which first predicts the system state and then updates it with the observation. The system state transition and observation models can be differentiable nonlinear functions:

$$x_k = f(x_{k-1}, u_k) + w_k, \qquad z_k = h(x_k) + v_k$$

where $f$ is the nonlinear system model, $h$ is the measurement model, $w_k$ is the system noise, and $v_k$ is the measurement noise; both are zero-mean Gaussian noise [20], with covariance matrices $Q_k$ and $R_k$, respectively.

Taking as input the increment of the odometer from time $k-1$ to $k$, then

$$f(x_{k-1}, u_k) = \begin{bmatrix} x_{k-1} + \Delta d_k \cos(\theta_{k-1} + \Delta\theta_k/2) \\ y_{k-1} + \Delta d_k \sin(\theta_{k-1} + \Delta\theta_k/2) \\ \theta_{k-1} + \Delta\theta_k \end{bmatrix}$$

where $\Delta d_k$ is the odometer distance increment and $\Delta\theta_k$ is the odometer yaw increment.

The corresponding Jacobian matrix is

$$F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1}} = \begin{bmatrix} 1 & 0 & -\Delta d_k \sin(\theta_{k-1} + \Delta\theta_k/2) \\ 0 & 1 & \Delta d_k \cos(\theta_{k-1} + \Delta\theta_k/2) \\ 0 & 0 & 1 \end{bmatrix}$$

The prediction process is

$$\hat{x}_{k|k-1} = f(\hat{x}_{k-1}, u_k), \qquad P_{k|k-1} = F_k P_{k-1} F_k^{T} + Q_k$$

The update process is

$$K_k = P_{k|k-1} H_k^{T} \left(H_k P_{k|k-1} H_k^{T} + R_k\right)^{-1}$$
$$\hat{x}_k = \hat{x}_{k|k-1} + K_k \left(z_k - h(\hat{x}_{k|k-1})\right), \qquad P_k = (I - K_k H_k) P_{k|k-1}$$

where $\hat{x}_{k-1}$ is the last state estimate, $F_k$ is the state transition matrix at the corresponding moment, $u_k$ is the control input vector, and $H_k$ is the observation matrix that maps the actual state space to the observation space. To build the robot model in the Gazebo simulation environment, the robot's odometer data is set to "odom", which includes the current position, posture, and velocity of the robot; the inertial navigation data is "imu", and the fused data is named "odom_combined". The EKF data fusion node subscribes to the "odom" and "imu" topics.
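As an illustration, the prediction–update cycle above can be sketched in Python for a planar pose state $(x, y, \theta)$: odometry increments drive the prediction, and an absolute IMU yaw reading serves as the observation. This is a minimal sketch; the noise covariances and the square-trajectory test below are illustrative assumptions, not the parameters used on the actual robot.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PoseEKF:
    """Minimal EKF fusing odometry increments with an IMU yaw observation.

    State: [x, y, theta]. Noise covariances Q and R are illustrative."""
    def __init__(self, Q=np.diag([1e-4, 1e-4, 1e-5]), R=np.array([[1e-3]])):
        self.x = np.zeros(3)       # pose estimate
        self.P = np.eye(3) * 1e-3  # state covariance
        self.Q, self.R = Q, R

    def predict(self, dd, dth):
        """Propagate the pose with odometer distance/yaw increments (dd, dth)."""
        th = self.x[2] + dth / 2.0
        self.x += np.array([dd * np.cos(th), dd * np.sin(th), dth])
        self.x[2] = wrap(self.x[2])
        F = np.array([[1, 0, -dd * np.sin(th)],   # Jacobian of the motion model
                      [0, 1,  dd * np.cos(th)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_yaw(self, z):
        """Correct the heading with an absolute IMU yaw measurement z."""
        H = np.array([[0.0, 0.0, 1.0]])
        y = wrap(z - self.x[2])                   # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x += (K * y).ravel()
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PoseEKF()
for _ in range(4):                 # drive one side of a square, then turn 90 deg
    for _ in range(10):
        ekf.predict(dd=0.1, dth=0.0)
    ekf.predict(dd=0.0, dth=np.pi / 2)
    ekf.update_yaw(ekf.x[2])       # IMU agrees here, so the correction is small
print(np.round(ekf.x, 3))          # a closed square should end near the origin
```

With noise-free increments the estimate returns to the start of the square; in practice the IMU yaw correction is what keeps the heading, and hence the whole pose, from drifting as in Figure 5.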
Figure 5 shows the robot completing a square trajectory with and without the EKF in the Gazebo robot simulation environment. From the simulated robot's path, the pose fused by the EKF exhibits less drift, making positioning more accurate.
3.2. The Method and Application of the Robot’s Mainstream SLAM Mapping (Gmapping)
The environment information is acquired by a 2D LIDAR as single-plane laser point cloud data, so its feature information is sparser than that of a visual sensor; the feature map [21] representation is therefore excluded first. To ensure stable and accurate navigation and prevent the loss of position information, the topological map [22] representation is also excluded. After this comprehensive consideration, the grid map [23] representation is used to build the robot's environment map.
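A grid map discretizes the environment into cells whose occupancy is updated from laser returns. The paper does not spell out the update rule, but a common scheme, sketched here under illustrative cell sizes and log-odds increments, accumulates log-odds per cell:

```python
import numpy as np

class OccupancyGrid:
    """Toy occupancy grid updated in log-odds, as commonly used with 2D LIDAR.
    The grid size and the log-odds increments are illustrative assumptions."""
    def __init__(self, w=20, h=20, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros((h, w))   # 0 log-odds == probability 0.5
        self.l_occ, self.l_free = l_occ, l_free

    def update_ray(self, cells, hit):
        """cells: grid cells traversed by one laser ray; the last is the endpoint."""
        for c in cells[:-1]:
            self.logodds[c] += self.l_free        # cells the beam passed through
        self.logodds[cells[-1]] += self.l_occ if hit else self.l_free

    def prob(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))

grid = OccupancyGrid()
ray = [(10, x) for x in range(10, 16)]   # horizontal beam ending in a hit
for _ in range(5):                        # repeated observations sharpen the map
    grid.update_ray(ray, hit=True)
p = grid.prob()
print(round(p[10, 15], 2), round(p[10, 12], 2))  # endpoint vs. traversed cell
```

The log-odds form makes repeated updates a simple addition, which is one reason grid maps suit real-time laser mapping.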
SLAM is the process of estimating the robot's location and environment information in an unknown environment from the current environment measurements, the position information at the previous time, and the control input.
A filter is indispensable to robot SLAM. The most representative is the SLAM algorithm based on a Particle Filter (PF); Gmapping is improved on the basis of the Rao-Blackwellized Particle Filter (RBPF) [24]. This method has two problems.
First, there is a high demand for robot posture accuracy when constructing the map. If the posture is obtained only from the motion model, the error will be large, causing a mismatch between the real environment and the built map. The proposal distribution is too scattered and differs considerably from the real distribution; raising the number of particles near the actual distribution requires more particles overall, which increases the computational cost.
Second, frequent resampling leads to particle depletion. Each particle represents a posture state of the robot and carries full posture and map information; frequent resampling causes the particles to gather at one point, rapidly decreasing diversity, and even correct particles tend to be discarded, causing the loss of location information.
Gmapping addresses the particle count of RBPF with an improved proposal distribution, and addresses RBPF's frequent resampling with selective resampling, which reduces the amount of resampling [25]. Gmapping is used to build the map of the robot simulation environment constructed with ROS [26].
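Selective resampling typically resamples only when the effective sample size $N_{\mathrm{eff}} = 1/\sum_i w_i^2$ falls below a threshold (commonly $N/2$). The sketch below illustrates the idea; the weight vectors and the 0.5 threshold are illustrative values, not Gmapping's internals.

```python
import numpy as np

def n_eff(weights):
    """Effective sample size of a particle weight vector (normalized first)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def selective_resample(particles, weights, rng, threshold=0.5):
    """Resample only when N_eff drops below threshold * N (selective resampling).
    Returns (particles, weights, resampled_flag)."""
    n = len(particles)
    if n_eff(weights) >= threshold * n:
        return particles, weights, False          # weights still healthy, skip
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(n, size=n, p=w / w.sum())    # draw particles by weight
    return [particles[i] for i in idx], [1.0 / n] * n, True

rng = np.random.default_rng(0)
even = [0.25, 0.25, 0.25, 0.25]       # N_eff = 4: no resampling needed
skewed = [0.97, 0.01, 0.01, 0.01]     # N_eff ~ 1.06: resample
_, _, r1 = selective_resample(list("abcd"), even, rng)
_, _, r2 = selective_resample(list("abcd"), skewed, rng)
print(r1, r2)
```

Skipping resampling while the weights stay balanced is exactly what preserves particle diversity and avoids the depletion problem described above.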
Figure 6 shows the map obtained.
3.3. AMCL Relocation Based on LIDAR Data
With the data fusion and positioning of the robot's internal sensors complete, the environment is observed by LIDAR and a more accurate map is constructed. With a known map, it is difficult to maintain positioning accuracy for a long time using only the relative positioning of the robot's internal sensors. Therefore, LIDAR is used to obtain environment information and match it against the known map; the position and posture information derived from the external environment corrects the accumulated error of relative positioning and ensures positioning accuracy. Only when autonomous positioning is realized can path planning and navigation be realized well. This is called the robot's relocalization process; it also handles wheel slippage, drift, and the robot being moved over a small range by a human. AMCL is used to realize this process.
The small black lines in Figure 7 represent the particle swarm estimating the robot's position, the large white arc represents the mean of the particle swarm, and the small white circle represents the robot's true position on the map; through this process, the grey robot mark in the figure finally coincides with the real position, realizing autonomous positioning. Figure 7 shows the AMCL process.
Firstly, as shown in Figure 7a, each particle represents a possible position of the robot, and all have the same confidence. Then, every particle is propagated through the robot's motion equation so that all particles move with the robot, and particles that move off the map are eliminated, as shown in Figure 7b. Next, the remaining particles are compared with the observed map environment information, the confidence of each particle is updated, highly matched particles gain confidence, and particles are redistributed according to confidence to increase the number of accurate particles, as shown in Figure 7c. Finally, by repeating the above two steps, all of the particles gather around the robot, as shown in Figure 7d, completing the global localization process.
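The propagate–weight–resample loop just described can be condensed into a toy one-dimensional Monte Carlo localizer. The corridor "map", the noise-free sensor, and the 0.9/0.1 sensor model below are invented purely for illustration:

```python
import numpy as np

# A 1-D "corridor" of walls (1) and open cells (0), acting as the known map.
CORRIDOR = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])

def sense(pos):
    """The robot senses the map cell it stands on (noise-free for simplicity)."""
    return CORRIDOR[pos % len(CORRIDOR)]

def mcl_step(particles, move, observation, rng):
    """One propagate -> weight -> resample cycle of Monte Carlo localization."""
    particles = (particles + move) % len(CORRIDOR)              # motion update
    w = np.where(CORRIDOR[particles] == observation, 0.9, 0.1)  # sensor model
    w = w / w.sum()
    return rng.choice(particles, size=len(particles), p=w)      # resample

rng = np.random.default_rng(1)
true_pos = 2
particles = rng.integers(0, len(CORRIDOR), size=500)  # global uncertainty
for _ in range(8):                                    # the robot walks right
    true_pos = (true_pos + 1) % len(CORRIDOR)
    particles = mcl_step(particles, 1, sense(true_pos), rng)
# Most particles should now agree with the true position.
mode = np.bincount(particles, minlength=len(CORRIDOR)).argmax()
print(mode, true_pos)
```

As the observation sequence rules out inconsistent hypotheses, the particle cloud collapses onto the true position, mirroring the convergence from Figure 7a to Figure 7d.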
If a wheel slips or the robot is moved by a human, the robot becomes separated from its real position in the environment, as shown in Figure 7e, and this is corrected by AMCL repositioning. The process is as follows: first, a sudden decrease in the average particle score indicates that the robot has deviated from its estimated real position; at this time, some particles are scattered across the map environment, as shown in Figure 7f. Afterwards, the map information is rematched for each scattered particle, its confidence is calculated, and the number of highly matching particles is increased, as shown in Figure 7g. Ultimately, through this iterative process, the robot obtains its position information again and relocalization is realized, as shown in Figure 7h.
According to this analysis, AMCL needs the robot's motion information and LIDAR data; it also needs the map information, Transform Frame (TF) conversion, and the mean and variance used to initialize the particle filter for robot positioning.
The method is verified in the simulation environment and displayed with the RViz visualization tool of ROS. Figure 8a shows the initial state (the particles gather gradually as the robot moves), and Figure 8b shows the final result. Each small arrow represents a particle, and all particles are updated iteratively so that they gather around the robot. With the above method alone, the robot cannot obtain its real position quickly, so an initial pose is usually given to AMCL; it does not need to be very accurate, and it enables the robot to complete initialization quickly in order to meet real-time requirements.
4. Research on the Path Planning of a Robot Autonomous Navigation System
Path planning is analyzed from two aspects: global and local path planning. Common global path planning algorithms include Dijkstra, A* [27], the Probabilistic Road Map (PRM), and the Rapidly exploring Random Tree (RRT). The Dijkstra algorithm is a classic breadth-first search algorithm: starting from the initial point, it searches the entire free space layer by layer until it reaches the target point. It can obtain the shortest distance between two points, but at greatly increased time and data cost. The A* algorithm adds a heuristic function on the basis of Dijkstra, thereby improving its efficiency. The PRM algorithm first samples points in the environment and connects them into a probabilistic road map; when the start and goal points are set, it finds the sampled points closest to them, but it is difficult to sample in narrow and long areas. The RRT algorithm takes the starting point as the root node, randomly samples poses, and repeatedly expands the tree according to a reachability condition until the destination is added to a branch of the tree; although this method directly considers the control, the resulting route is not the shortest. By contrast, searching the global path with A* or Dijkstra is relatively simple, but Dijkstra pays a much higher search cost for its shortest route. In summary, A* was selected as the global path planning algorithm.
The Dijkstra algorithm and the A* algorithm are provided in the navigation function package in ROS.
Table 1 is a comparison of the running time of Dijkstra’s algorithm and the A* algorithm.
It can be seen from Table 1 that the time taken by Dijkstra's algorithm is always longer than that of the A* algorithm, whether or not there is an obstacle.
4.1. Global Path Planning for Robots Based on A*
Global path planning can be described as planning a complete path from the starting spot to the goal spot when the environment map is completely known. The A* algorithm is depicted as follows.
Assume that the starting spot of the robot is O, the goal spot is G, and the current node is N. A* uses a cost evaluation function:

$$f(n) = g(n) + h(n)$$

where $n$ is the search node; $f(n)$ is the cost for the robot to move from O through $n$ to G; $g(n)$ is the actual cost to move from O to N, which is called the dissipation function; and $h(n)$ is the estimated cost to move from N to G, which is called the heuristic function.
A* improves the Dijkstra algorithm [28] by adding a heuristic function; Figure 9 shows the influence of $h(n)$ on A*. When $h(n) = 0$, the heuristic function does not work and A* reduces to the Dijkstra algorithm; the larger $h(n)$, the more restricted the search space and the harder it is to guarantee an optimal search. $h(n)$ is usually determined by the Euclidean or Manhattan distance from the current node to the goal spot. The specific flow chart of A* is shown in Figure 10.
Figure 11 shows the process in the form of a graph, where the upper-left, lower-left, and lower-right corners of each square represent $f(n)$, $g(n)$, and $h(n)$, respectively. Yellow represents the starting point, gray the ending point, red the optimal path, and black the obstacles.
Based on the above analysis, Figure 12 shows the simulation results of the A* algorithm, where the black squares represent walls, dark green is the starting spot, red is the goal spot, light blue-green represents the search space traversed during path planning, and the yellow solid line represents the optimal path planned from the starting spot to the goal spot.
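A minimal grid-based A* in Python, consistent with the cost function $f(n) = g(n) + h(n)$ above; the 4-connected grid, the Manhattan heuristic, and the small test map are illustrative choices rather than the paper's implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] == 1 marks an obstacle.
    Returns the optimal path as a list of cells, or None if unreachable."""
    def h(n):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_set = [(h(start), start)]          # priority queue ordered by f = g + h
    g = {start: 0}
    parent = {start: None}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:          # walk parent links back to the start
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue                    # off the map
            if grid[nxt[0]][nxt[1]] == 1:
                continue                    # obstacle cell
            ng = g[cur] + 1                 # uniform step cost
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                parent[nxt] = cur
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 2))
print(len(path) - 1, path[0], path[-1])   # number of moves and endpoints
```

Setting `h` to always return 0 turns this into Dijkstra's algorithm, which is exactly the relationship between the two methods discussed above.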
4.2. Robot’s Local Path Planning Based on DWA
In local path planning, the robot plans the best locally available path and issues speed commands to control its movement according to real-time dynamic environment information from LIDAR. Common local path planning algorithms include the Artificial Potential Field (APF) [29], the Vector Field Histogram (VFH) [30], and DWA. Given the robot's indoor application environment and the grid map representation, DWA was chosen; given the characteristics of a dynamic environment, DWA can realize real-time obstacle avoidance.
DWA forecasts trajectories from multiple velocity samples of the robot's motion state according to the chassis motion model; it chooses the best predicted trajectory so as to avoid collisions, and the corresponding speed command is then issued to drive the robot along the best local path.
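The sample–simulate–score cycle of DWA can be sketched as follows for a differential-drive robot. The window bounds, scoring weights, and the single circular obstacle are illustrative assumptions, not the parameters of the deployed planner:

```python
import math

def simulate(pose, v, w, dt=0.1, steps=10):
    """Roll a constant (v, w) command forward to get a predicted trajectory."""
    x, y, th = pose
    traj = []
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return traj

def dwa_step(pose, goal, obstacle, obs_radius=0.3):
    """Score sampled (v, w) pairs in the dynamic window; pick the best safe one."""
    best, best_cmd = -float("inf"), (0.0, 0.0)
    for i in range(6):                       # linear velocities in [0, 0.5] m/s
        for j in range(11):                  # angular velocities in [-1, 1] rad/s
            v, w = 0.1 * i, -1.0 + 0.2 * j
            traj = simulate(pose, v, w)
            clearance = min(math.dist(p, obstacle) for p in traj)
            if clearance < obs_radius:
                continue                     # trajectory would hit the obstacle
            heading = -math.dist(traj[-1], goal)   # closer endpoint is better
            score = heading + 0.1 * clearance + 0.05 * v
            if score > best:
                best, best_cmd = score, (v, w)
    return best_cmd

# An obstacle sits directly on the straight line to the goal.
v, w = dwa_step(pose=(0.0, 0.0, 0.0), goal=(2.0, 0.0), obstacle=(0.45, 0.0))
print(v, w)   # the chosen command keeps the predicted trajectory collision-free
```

Real DWA also clips the window by the robot's acceleration limits and evaluates velocity and path-alignment terms; this sketch keeps only the core idea of scoring simulated trajectories.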
5. Research on Automatic Stopping and Avoiding Obstacles
During the development of the robot navigation system, it is equally important to ensure the safety of the robot body and the surrounding environment. Local path planning based on the Dynamic Window Approach (DWA) is carried out on the dynamic environment map; the real-time information obtained by the laser detects whether there are obstacles in front of the robot and their distances [31], realizing the robot's autonomous navigation and obstacle avoidance. However, because the data acquired by the 2D LIDAR comes from a single plane, shorter roadblocks cannot be detected. Therefore, a safety edge sensor is used to detect shorter roadblocks and stop the robot's movement based on real-time feedback.
The safety edge, also known as a bumper strip, is a pressure-sensitive switch in the form of a soft, bendable ribbon. It can be fixed around the moving vehicle body to prevent collisions, and it generally uses normally-open logic. When triggered, the safety edge compresses and deforms due to its softness; a closed IO signal is then generated and transmitted to the core controller, feeding back that the moving body has collided with an object. The safety edge matters because a mobile robot usually perceives the scene with a two-dimensional laser and can only identify objects in the laser plane, whereas the site may contain three-dimensional obstacles whose protruding lower parts cannot be perceived. Such a protrusion will touch the anti-collision strip, and the robot stops immediately when the strip is triggered. Therefore, the anti-collision strip is generally installed relatively low, on planes the laser cannot scan, at both the front and rear of the robot.
For the robot itself, the safety edge sensor is given the highest control priority, followed by keyboard control, and finally the autonomous navigation process. The robot's safety behaviour follows this priority: when the safety edge sensor receives a collision pressure signal, the robot stops immediately.
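This priority scheme can be sketched as a simple command arbiter; the source names and the stop behaviour below are illustrative assumptions based only on the priority order described above:

```python
# Command arbiter: lower number = higher priority, matching the order in the text.
PRIORITY = {"safety_edge": 0, "keyboard": 1, "navigation": 2}
STOP = (0.0, 0.0)  # zero linear and angular velocity

def arbitrate(commands):
    """Pick the velocity command from the highest-priority active source.
    commands: dict mapping source name -> (v, w), absent if inactive."""
    for source in sorted(PRIORITY, key=PRIORITY.get):
        cmd = commands.get(source)
        if cmd is not None:
            # A triggered safety edge always forces an immediate stop.
            return STOP if source == "safety_edge" else cmd
    return STOP  # no active source: stay still

# Normal operation: only the navigation planner is publishing.
print(arbitrate({"navigation": (0.3, 0.1)}))
# Collision: the safety edge overrides everything with a stop.
print(arbitrate({"safety_edge": (0.0, 0.0), "navigation": (0.3, 0.1)}))
```

The point of the arbiter is that the stop decision never depends on the planner: the highest-priority source wins unconditionally.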
6. Practical Application Platform of the Autonomous Navigation System
By analyzing the operating mechanism of the robot's ROS system [32], a complete navigation software system for the robot chassis was built and concentrated in a single ROS workspace; multiple processes run simultaneously to achieve the autonomous navigation function of the chassis, completing the chassis software design. The developed navigation system was applied to actual robots, the physical parameters of the new chassis were recalibrated, and the system was analyzed and verified through practical application.
Figure 13 shows the robot body; its main function is to assist substation workers in the inspection of distribution switch cabinets. The robot has three main parts: a Mecanum wheel chassis, a three-axis robot arm platform movement module, and a spindle operation pan-tilt module.
We used the power distribution switchgear robot to achieve positioning and mapping, building the map with the Gmapping method and saving the map file with the map_server of ROS; the saved map provides a reference for the later navigation process. In the same indoor environment, the map created by the mobile robot using the Gmapping algorithm with the odometer alone is shown in Figure 14a, and the map created by fusing the odometer and IMU data is shown in Figure 14b. Both maps have a resolution of 5 cm. Comparing the two maps, they differ little during the initial period; as time and the robot's travelled distance accumulate, the contour of the odometer-only map gradually tilts and deforms, and the constructed map deviates from the actual indoor environment, as shown in the red box in Figure 14a, where the map is tilted by 2°. In contrast, the map constructed by fusing the odometer and IMU data has a clear outline, accurately estimates the pose of the mobile robot in the indoor environment, and truly expresses the indoor environment. This comparative analysis shows that the latter produces more accurate map results.
The corridor widths at the two ends and at three positions in the middle were selected for measurement and compared with the values measured on the grid map. The experimental data are shown in Table 2 and Table 3. Comparing them, the map accuracy error in the early stage is relatively small; as the grid map grows, the error of the map created by the odometer alone becomes larger than that of the fused odometer and IMU, so the mapping effect is better after fusing the odometer and IMU.
Autonomous navigation must then be achieved in this environment without rebuilding the map. The constructed indoor substation environment map is shown in Figure 15.
For the analysis of the robot's autonomous navigation system, we mainly analyzed the positioning accuracy when the robot reaches the target point during cyclic navigation. As shown in Figure 16, we set a starting point and a target point for the robot and used tape to mark its initial pose at both points. We then set the robot to cycle continuously between the two points, recording the position and attitude at each arrival at the target point.
After the mobile robot reached the target point in each of many experiments, we measured its actual position with a tape measure, compared it with the position data in RViz, and took the maximum, minimum, and average errors over the 10 sets of data. The accuracy is shown in Table 4; the accuracy of the mobile robot's autonomous navigation meets the experimental requirements.
To ensure the safety of the robot, we analyzed its obstacle avoidance and stop functions, as shown in Figure 17. When the LIDAR scans an obstacle, the robot autonomously bypasses it without its LIDAR data being affected. When an obstacle that the radar cannot scan is placed on the robot's navigation path and a collision occurs, the safety edge reads the collision signal and stops the robot's movement; after we manually remove the obstacle, the robot resumes navigation.
Figure 18 shows the actual operation process of the robot. Figure 18a shows the initial position and charging position; these may differ, and according to the tasks published by the server, the robot autonomously navigates to the assigned target point. Figure 18b shows the robot's movement process, and Figure 18c shows the robot reaching the goal spot. This analysis shows that the autonomous navigation system meets the navigation and positioning requirements of the distribution switch cabinet operating robot, and the platform verified the practical value of the developed navigation system. Finally, the complete functions and performance indicators of the robot were tested and certified by a third-party testing agency and meet the project indicators and requirements.