**1. Introduction**

The multilegged robot, which originated from reptile bionics, offers good walking stability and low energy consumption when stationary, and its redundant limb structure allows it to maintain stability in complex environments [1]. Compared with wheeled robots, multilegged robots can cross large obstacles and have many degrees of freedom; their flexibility and adaptability on complex terrain give legged robots a wide range of applications. Researchers have designed multilegged robots for tasks such as mine sweeping [2], volcano detection [3], underwater operation [4], strawberry picking [5], and load transfer, in addition to other prototypes. The autonomous mobility of a multilegged robot depends on how well it perceives its surrounding environment. Because multilegged robots mainly work in unstructured environments, classifying various terrains, detecting obstacles, and localizing and recognizing complex terrain have become primary issues in the field.

For multilegged robots, environment perception mainly involves accurate terrain identification and obstacle detection. The most common approach is to combine image processing methods with classifiers. By extracting information from terrain images, such as spectra [6], color [7], texture [8,9], scale-invariant feature transform (SIFT) features [10], speeded up robust features (SURF) [11], and the DAISY descriptor [11], the terrain can be accurately identified. However, spectrum-based methods concentrate only on the spatial frequencies of texture distributions, and color-based methods have poor robustness because they are easily affected by light and weather conditions. Local features that are invariant to scale, rotation, brightness, and contrast have therefore been widely used in visual classification. Besides vision, legged robots are often equipped with other sensors, so information from multiple sensors can also be used for terrain recognition. Kim [12] used the friction coefficients of different terrains to classify them with a Bayes classifier. Ojeda [13] proposed a terrain classification method based on fusing information from multiple sensors (gyroscopes, accelerometers, encoders). Larson [14] proposed a model based on the robot inclination angle obtained from an odometer. Hoepflinger [15] used joint motor currents and force sensor data to recognize terrain categories. Jitpakdee [16] proposed a neural network model for terrain classification based on the body acceleration and angular velocity measured by an inertial measurement unit (IMU). This kind of information differs substantially from visual information, so special methods are needed.
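As a toy illustration of such non-visual terrain classification, the sketch below trains a Gaussian naive Bayes classifier, in the spirit of the friction-based approach of [12], on synthetic proprioceptive features (an estimated friction coefficient and a body-vibration variance). The feature definitions, class statistics, and the use of scikit-learn are illustrative assumptions, not any of the cited authors' implementations.

```python
# Sketch: Bayes classification of terrain from proprioceptive features.
# All feature values and class statistics below are synthetic assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)

def samples(mu_friction, mu_vib, n=100):
    # Each sample: [estimated friction coefficient, vertical-vibration variance]
    return np.column_stack([
        rng.normal(mu_friction, 0.05, n),
        rng.normal(mu_vib, 0.02, n),
    ])

# Three hypothetical terrain classes with distinct friction/vibration statistics.
X = np.vstack([samples(0.3, 0.05), samples(0.6, 0.15), samples(0.9, 0.30)])
y = np.repeat([0, 1, 2], 100)  # 0 = slippery, 1 = soil, 2 = high-grip (illustrative)

clf = GaussianNB().fit(X, y)

# Classify a new measurement whose features lie near the class-1 means.
pred = clf.predict([[0.62, 0.14]])[0]
print(pred)
```

Because the class-conditional feature distributions barely overlap here, the classifier separates the three terrains almost perfectly; on a real robot the feature statistics would come from logged sensor data rather than fixed Gaussians.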

Most existing methods achieve good classification accuracy on single-type terrain, but few are suitable for the mixed terrain that is common in natural environments. To address this problem, Filitchkin [17] used a sliding window technique for heterogeneous terrain images, Liang [18] developed an algorithm for complex terrain classification based on multisensor fusion, and Ugur [19] proposed a learning method that predicts the environment through consecutive distance- and shape-related feature extraction. However, most of these methods have poor robustness because they require high-resolution images. Mixed terrain has different features than single-type terrain, and sometimes the terrain boundaries cannot be recognized clearly, which makes identification of mixed terrain difficult. To improve the recognition rate on mixed terrain and terrain with obstacles, this paper proposes a systematic classification method for complex terrain. The following aspects were studied:

- Terrain information is collected by a Kinect 3D vision sensor, and a fast and effective terrain classifier is established based on speeded up robust features and binary robust invariant scalable keypoints (SURF-BRISK) features and a support vector machine (SVM).
- A segmentation method for complex terrain images based on super-pixels is proposed, which can effectively segment complex terrain images into single-terrain images.
- An image infilling method for terrain with obstacles and mixed terrain is proposed; local features are magnified to aid the recognition of different complex terrains.
- Experiments on classifying terrain with obstacles and mixed terrain are conducted, and the proposed system is validated on the multilegged robot.
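As a rough sketch of the classifier stage outlined above, the example below builds a bag-of-visual-words terrain classifier: local descriptors (synthetic stand-ins for SURF-BRISK features) are clustered into a visual vocabulary, each image is represented as a histogram of visual words, and a linear SVM separates the terrain classes. The synthetic descriptors, the scikit-learn components, and all parameter values (vocabulary size, kernel) are assumptions for illustration, not the implementation described in this paper.

```python
# Sketch: bag-of-visual-words + SVM terrain classifier.
# Synthetic descriptors stand in for SURF-BRISK features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synthetic_descriptors(terrain_class, n=50, dim=64):
    # Each terrain class yields descriptors around a different mean,
    # mimicking class-specific local texture statistics.
    return rng.normal(loc=terrain_class * 2.0, scale=1.0, size=(n, dim))

# Training set: each "image" is a set of local descriptors plus a label.
classes = [0, 1, 2]  # e.g. grass, gravel, sand (illustrative)
train_images = [(c, synthetic_descriptors(c)) for c in classes for _ in range(10)]

# 1) Learn a visual vocabulary by clustering all training descriptors.
all_desc = np.vstack([d for _, d in train_images])
vocab = KMeans(n_clusters=16, n_init=5, random_state=0).fit(all_desc)

def bow_histogram(desc):
    # 2) Quantize descriptors to visual words; build a normalized histogram.
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for _, d in train_images])
y = np.array([c for c, _ in train_images])

# 3) Train the SVM on the word histograms.
clf = SVC(kernel="linear").fit(X, y)

# Classify an unseen "image" drawn from terrain class 1.
pred = clf.predict([bow_histogram(synthetic_descriptors(1))])[0]
print(pred)
```

In a real pipeline the descriptors would be extracted from the Kinect images with a keypoint detector and binary descriptor (e.g. via OpenCV) instead of being sampled from Gaussians; the histogram-plus-SVM stage is unchanged.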

This paper is organized as follows: In Section 2, the hexapod robot and SURF-BRISK–based image infilling method are introduced. In Section 3, the experimental results are presented and analyzed. Section 4 summarizes and concludes the paper.

#### **2. Materials and Methods**
