#### *3.1. Complex Terrain*

In the experiments, the hexapod robot walked on six types of terrain without obstacles. Terrain images were collected by a Kinect camera mounted on top of the robot, with the sensor inclined at 40°. Images of terrain with obstacles were collected at different times and under different weather and lighting conditions. Obstacles mainly included cartons, trash, and trees. Fifty images were collected for each terrain type. The collected images of terrain with obstacles were processed by the ILI method, and all images, before and after ILI processing, were then classified by the presented terrain classifier. The recognition results for the two sets are shown in Figure 12a. The recognition rate for terrain with obstacles before ILI processing was relatively low: because obstacles seriously distort local terrain features, misclassification occurred in most cases and the average recognition accuracy was below 75%. After image infilling, by contrast, recognition accuracy improved to above 85%.

**Figure 12.** Recognition results: (**a**) terrain with obstacles; (**b**) mixed terrain.

Usually, mixed terrain appears at the intersection of different terrains, and all mixed-terrain images were collected at such transitions. Specifically, 50 images were collected and processed by the SPI method for terrain recognition; the results are shown in Figure 12b. The classifier outputs the labels of two terrain types for the different subareas of each image. The average recognition accuracy reached 85%, showing that the proposed method is effective in recognizing mixed terrain. Moreover, compared with a single-label classifier, the SPI method is of greater practical value for gait transition of the hexapod robot.
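The per-subarea labeling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `classify` callable is a hypothetical stand-in for the trained terrain classifier, assumed to return a length-6 array of per-terrain confidence scores for an image region.

```python
import numpy as np

TERRAIN_LABELS = {1: "asphalt", 2: "tile", 3: "soil", 4: "gravel", 5: "sand", 6: "grass"}

def label_subareas(image, classify, n_splits=2):
    """Split a mixed-terrain image into horizontal bands and label each one.

    `classify` is a hypothetical stand-in for the trained classifier: it maps
    an image region to a length-6 array of per-terrain confidence scores.
    """
    h = image.shape[0]
    labels = []
    for k in range(n_splits):
        # Take the k-th horizontal band of the image.
        sub = image[k * h // n_splits:(k + 1) * h // n_splits]
        scores = classify(sub)
        # Terrain types are numbered 1-6, so shift the 0-based argmax.
        labels.append(TERRAIN_LABELS[int(np.argmax(scores)) + 1])
    return labels

# Toy usage: a fake classifier that reports asphalt for the dark top half
# and soil for the bright bottom half of a synthetic image.
fake = lambda region: np.eye(6)[0] if region.mean() < 0.5 else np.eye(6)[2]
img = np.vstack([np.zeros((10, 10)), np.ones((10, 10))])
print(label_subareas(img, fake))  # ['asphalt', 'soil']
```

Splitting into two bands matches the two-label output described for Figure 12b; `n_splits` is exposed only for illustration.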

#### *3.2. Robot Platform Application*

When the robot walks on different terrains, different gaits have different effects on its stability, performance, and energy consumption. The experiment showed that the gait can be changed based on the output of the terrain classifier. In the experiment, the hexapod robot walked for 30 s across three terrain types: asphalt, soil, and grass. The sampling period of the Kinect was 1 s, and the gait pattern of the hexapod robot was set according to the output of the terrain classifier. The pseudocode of the gait transition algorithm is given in Table 4.

The value of *G* has a great influence on the smoothness and efficiency of motion on different terrains. The terrain classification results, together with the gait value, the leg current from robot leg SRL1, and the attitude angle, are shown in Figure 13. From 0–5 s, the terrain is recognized as asphalt, so the robot moves in a tripod gait. From 5–6 s, the robot is in a transition gait, ready to stride across mixed terrain consisting of asphalt and soil. On the terrain curve, a fractional value indicates mixed terrain; e.g., 1.3 means the terrain is changing from type 1 to type 3. From 6–21 s, the robot moves forward with its current gait. From 22–23 s, it changes gait to prepare for the next terrain. Finally, from 23–30 s, the terrain is grass and the robot continues in a wave gait. The experimental results show that the robot can walk stably on a single terrain type and switch its gait successfully between terrains. At the initial moment, images captured by the Kinect are classified by the system. The highest confidence score, for asphalt, is greater than 30%, so the system judges the image to be a single terrain, asphalt pavement. Similarly, at 14 s and 23 s, the system outputs a single terrain label for soil and grass, respectively. At 5 s and 22 s, the terrain image falls into the uncertain category, with the highest confidence below 30%; the terrain is therefore assumed to be mixed, and the image infilling method is applied until the classification result meets the recognition reliability requirement. The system then outputs the labels of two terrains, and the robot makes the corresponding gait transition.
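The single-versus-mixed decision described above can be sketched in a few lines. The 30% threshold comes from the text; the function name and the score format (a plain list of six confidences) are assumptions for illustration.

```python
def interpret_scores(scores, threshold=0.3):
    """Decide between a single terrain label and the uncertain (mixed) case.

    `scores` holds six confidence values, one per terrain type
    (1: asphalt ... 6: grass), as output by the terrain classifier.
    If the best score clears the 30% threshold, return that terrain's
    1-based index; otherwise flag the image as mixed, so that the SPI
    segmentation-and-infilling step can be run on it.
    """
    best = max(scores)
    if best >= threshold:
        return ("single", scores.index(best) + 1)  # 1-based terrain type
    return ("mixed", None)

print(interpret_scores([0.6, 0.1, 0.1, 0.1, 0.05, 0.05]))   # ('single', 1)
print(interpret_scores([0.2, 0.2, 0.15, 0.15, 0.15, 0.15])) # ('mixed', None)
```

In the experiment this check runs once per 1 s Kinect sample; the `mixed` branch corresponds to the behaviour observed at 5 s and 22 s in Figure 13.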

**Table 4.** Gait transition algorithm.

```
Initialize: G ∈ [0.5, 1]; SD ∈ {Dij = 0, j = 1, 2, ..., 6; i = 1, 2, ..., 2n};
            B ∈ {0, 1}; n = 0; Ti ∈ {1, 2, ..., 6}
Repeat:
    (1) Collect terrain images: color, depth, and infrared
    (2) Run the obstacle detection module and output B
        if B = 1 then
            Run image infilling processing I
            Jump to (2)
        else if B = 0 then
            continue
        end
    (3) Run the terrain classifier module and output SD
        for i = 1; i ≤ 2n; i++
            if max{Dij, j = 1, 2, ..., 6} < 0.3 then
                n++
                Run image segmentation processing
                Run image infilling processing II
                Jump to (3)
            else
                Output the subscript j of max{Dij, j = 1, 2, ..., 6}; Ti = j
            end
        end
    (4) Output classification results and gait G
        for i = 1; i ≤ 2n−1; i++
            if Ti = 1 or 2 then G = 0.5
            else if Ti = 3 or 4 then G = 0.75
            else if Ti = 5 or 6 then G = 0.83
            end
        end
        T = T1, T2, T3, ..., T(2n−1)
    (5) Run the robot
Until: the robot is switched off.
```

Note: *G* is the gait value: 0.5 for tripod gait, 0.75 for quadruped gait, and 0.83 for wave gait. *SD* is the set of confidence scores *Dij*, where *j* denotes the terrain type (1 for asphalt, 2 for tile, 3 for soil, 4 for gravel, 5 for sand, 6 for grass) and *i* is the serial number of the image. *B* is the result of obstacle detection: *B* = 1 means an obstacle is present, *B* = 0 means no obstacle. *Ti* is the output label of the terrain classifier. Image infilling processing I denotes the ILI module, and image infilling processing II denotes the SPI module.
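Step (4) of Table 4 reduces to a mapping from terrain label to gait value *G*; a minimal Python sketch of that branch, using the terrain codes and *G* values given in the note above (the function name is an assumption):

```python
def select_gait(terrain_label):
    """Map a terrain label (1-6) to the gait value G from Table 4, step (4).

    Hard terrain   (1: asphalt, 2: tile)  -> tripod gait,    G = 0.5
    Medium terrain (3: soil,    4: gravel)-> quadruped gait, G = 0.75
    Soft terrain   (5: sand,    6: grass) -> wave gait,      G = 0.83
    """
    if terrain_label in (1, 2):
        return 0.5
    if terrain_label in (3, 4):
        return 0.75
    if terrain_label in (5, 6):
        return 0.83
    raise ValueError(f"unknown terrain label: {terrain_label}")

print(select_gait(1))  # 0.5  (asphalt -> tripod)
print(select_gait(6))  # 0.83 (grass -> wave)
```

In the 30 s experiment of Figure 13 this mapping yields the observed sequence: *G* = 0.5 on asphalt, 0.75 on soil, and 0.83 on grass.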

**Figure 13.** Gait transition.
