**5. Conclusions**

This paper proposes a terrain classification system for the accurate recognition of different terrains. Color, infrared, and depth images are acquired simultaneously with a Kinect sensor, and the infrared and depth information is fused for obstacle detection. Local features of terrain images are extracted with the SURF-BRISK algorithm, and a terrain classifier based on the bag-of-words (BoW) model and a support vector machine (SVM) is employed. With the proposed method, different terrains can be classified quickly and precisely. Complex terrains, i.e., terrains with obstacles and mixed terrains, are recognized with the help of a local image infilling method. The experimental results show that the proposed method greatly improves the accuracy of complex terrain recognition and supports the locomotion guidance of multilegged robots.
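The core of the classification pipeline is quantizing local descriptors (here, SURF-BRISK features) against a visual codebook to form a BoW histogram, which is then fed to an SVM. The following is a minimal NumPy sketch of the quantization step only; the descriptor values, codebook size, and function name are illustrative assumptions, and the codebook learning (e.g., k-means) and SVM training are omitted.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors (n x d) against a visual codebook (k x d)
    and return a normalized k-bin bag-of-words histogram."""
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()   # normalize so images of any size are comparable

# toy example: four 2-D descriptors quantized against a 3-word codebook
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
desc = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [5.2, 4.8]])
h = bow_histogram(desc, codebook)
# h -> [0.25, 0.5, 0.25]: one descriptor near word 0, two near word 1, one near word 2
```

Each terrain image thus becomes a fixed-length histogram regardless of how many local features it contains, which is what makes SVM classification over variable-size feature sets possible.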

The theoretical contributions and novelty of this paper can be summarized as follows:

(1) Image regions occupied by obstacles are infilled with the surrounding terrain in order to improve the classification accuracy. The local features of the terrain are thus emphasized, and the method achieves satisfactory results.

(2) A super-pixel image infilling method for mixed terrain classification is presented. The average classification accuracy of the proposed method for mixed terrain exceeds 80%. The resulting terrain data are therefore more reliable for the locomotion planning and control of intelligent robots.

(3) Multiple terrain labels can be assigned instead of a single label, which makes the presented method practical for complex terrains.
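The infilling idea behind contributions (1) and (2) can be sketched as follows. This is a hedged illustration, not the paper's exact procedure: it uses a simple constant fill with the mean of the non-obstacle pixels, whereas the paper's method infills from surrounding terrain parts and super-pixels; the function name and array shapes are assumptions.

```python
import numpy as np

def infill_obstacle(image, mask):
    """Replace masked (obstacle) pixels with the mean of the surrounding
    terrain pixels so the classifier sees only terrain texture.
    image: 2-D grayscale array; mask: boolean array, True over the obstacle."""
    terrain_mean = image[~mask].mean()  # average intensity of non-obstacle pixels
    filled = image.copy()
    filled[mask] = terrain_mean         # constant fill (illustrative simplification)
    return filled

# toy example: a 4x4 terrain patch with a 2x2 obstacle of outlier values
img = np.full((4, 4), 10.0)
img[1:3, 1:3] = 99.0                    # obstacle region
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = infill_obstacle(img, mask)
# out is uniformly 10.0: the obstacle no longer distorts the terrain statistics
```

After infilling, the BoW histogram of the patch is computed from terrain-like pixels only, which is why the classification accuracy improves on terrains with obstacles.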

This paper focuses on a complex terrain classification system that combines terrain classification with obstacle detection to support robot path planning. In future work, we will enable rapid adaptation of the robot's gait based on terrain information and make the robot more intelligent.

**Author Contributions:** Y.Z. designed the algorithm. Y.Z., C.M., C.J., and Q.L. designed and carried out the experiments. Y.Z. and C.J. analyzed the experimental data and wrote the paper. Q.L. gave many meaningful suggestions about the structure of the paper.

**Funding:** This research was funded by the National Natural Science Foundation of China (No. 51605039), the Thirteenth Five-Year Plan Equipment Pre-research Field Fund (No. 61403120407), the China Postdoctoral Science Foundation (Nos. 2018T111005 and 2016M592728), and the Fundamental Research Funds for the Central Universities, CHD (Nos. 300102259308, 300102258203, and 300102259401).

**Conflicts of Interest:** The authors declare no conflict of interest.
