*6.2. Feature Extraction from Time Series by Wavelet Decomposition and Classification by SVM and KNN*

In the second approach, we simplify tactile video classification so that a set of features extracted from the videos can be fed directly into conventional classifiers. In the previous approach, we relied on a CNN with a 3D kernel to learn features from the tactile data; here, we instead employ a 16-directional contourlet transform [20] to extract features from each tactile imprint. A feature vector of size 16 is computed as the standard deviation of each directional subband. The normal vectors at the probing locations (kinesthetic cues) are then appended to this vector, yielding a 1 × 19 feature vector for each probing point on the contour. As the tactile sensor moves along a contour, the variation of each of these 19 features forms 19 time series of length 25. We then apply a 3-level wavelet decomposition to each of the 19 time series, using the Daubechies 2 wavelet, to extract features that characterize how the tactile features vary as the sensor moves along the object contour. The root-mean-square (RMS) value, standard deviation, and skewness of the wavelet coefficients at each level, as well as those of the sequence itself, are concatenated to produce the final feature vector. To avoid the negative effect of high-dimensional data on classifier performance, we first select the most informative features based on their information gain and then reduce the resulting feature vector to five dimensions using a self-organizing map. Figure 8 summarizes the data-processing strategy for object recognition using a conventional classifier.
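The wavelet feature-extraction step described above can be sketched as follows. This is a minimal pure-NumPy illustration, not the authors' implementation: the periodic signal extension, the filter construction, and the names `dwt_step` and `wavelet_features` are our own assumptions. Per the text, each of the 19 time series of length 25 is decomposed over 3 levels with the Daubechies 2 (db2) wavelet, and each coefficient group, together with the raw sequence, is summarized by its RMS, standard deviation, and skewness.

```python
import numpy as np

# Daubechies-2 (db2) analysis filters (orthonormal low-pass / high-pass pair)
_SQRT3 = np.sqrt(3.0)
DB2_LO = np.array([1 + _SQRT3, 3 + _SQRT3, 3 - _SQRT3, 1 - _SQRT3]) / (4 * np.sqrt(2))
DB2_HI = DB2_LO[::-1] * np.array([1.0, -1.0, 1.0, -1.0])  # quadrature mirror filter

def dwt_step(x):
    """One level of the discrete wavelet transform with periodic extension."""
    n = len(x)
    # build circularly shifted windows, filter, then downsample by 2
    idx = (np.arange(n)[:, None] + np.arange(4)[None, :]) % n
    approx = (x[idx] * DB2_LO).sum(axis=1)[::2]
    detail = (x[idx] * DB2_HI).sum(axis=1)[::2]
    return approx, detail

def wavelet_features(series, levels=3):
    """Concatenate RMS, std, and skewness of the raw sequence, each detail
    level, and the final approximation (5 groups x 3 stats = 15 features)."""
    raw = np.asarray(series, dtype=float)
    groups = [raw]
    approx = raw
    for _ in range(levels):
        approx, detail = dwt_step(approx)
        groups.append(detail)
    groups.append(approx)
    feats = []
    for g in groups:
        rms = np.sqrt(np.mean(g ** 2))
        sd = g.std()
        skew = np.mean((g - g.mean()) ** 3) / sd ** 3 if sd > 0 else 0.0
        feats.extend([rms, sd, skew])
    return np.array(feats)
```

Applying `wavelet_features` to each of the 19 time series and concatenating the results gives the full descriptor for one contour, before the information-gain selection and self-organizing-map reduction mentioned in the text.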

**Figure 8.** Process of object classification using conventional classifiers.
