**2. Related Works**

Numerous works have investigated human activity recognition (HAR) in the last decade. The methodologies used recently can generally be classified into two dominant categories: (i) traditional classifiers and (ii) deep learning methods.

For the first category, numerous classifiers have been investigated. Parri et al. [11] proposed a fuzzy-logic classifier to identify lower limb locomotion modes with the assistance of gait phases; the authors developed a lower limb wearable robot system that helps impaired people perform locomotion activities. Chen et al. [12] proposed a robust activity recognition algorithm based on principal component analysis (PCA) and an on-line support vector machine (OSVM); the algorithm achieved robust recognition accuracy on a smartphone dataset collected in six different orientations. In [13], the authors compared the performance of SVM, Naive Bayes, k-Nearest Neighbour (kNN), and kStar classifiers; the results showed that kNN and kStar obtained the highest accuracy, while Naive Bayes obtained the lowest. Zhao et al. [14] proposed a two-layer model to detect six gait phases of walking: a neural network (NN) provides a pre-decision of the gait phase to a hidden Markov model (HMM), whose final decision achieved an accuracy of 98.11%. The limitation of this study is that only walking was considered, and the algorithm was tested on straight forward walking only, not free walking. In [15], a hidden semi-Markov model (HSMM) and a semi-Markov conditional random field (SMCRF) were applied to recognize human activity in a smart home. The results showed that the HSMM consistently outperformed the HMM, while the SMCRF performed similarly to the CRF. However, because daily activities at home are not stationary, it is impractical to represent activity switches with a stationary transition matrix. Moreover, the authors used only a Gaussian density for the conditioned observation density, which is too limited for complex scenarios.
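To make the classifier comparison in [13] concrete, the following is a minimal sketch of the kNN idea on synthetic two-class accelerometer-style features. The data, class labels, and parameters are illustrative assumptions, not the actual setup of [13]:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test sample by majority vote of its k nearest
    training samples under Euclidean distance."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)      # distance to every training sample
        nearest = y_train[np.argsort(d)[:k]]         # labels of the k closest samples
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

# Hypothetical 3-D feature windows for two well-separated activities
rng = np.random.default_rng(0)
walk = rng.normal(loc=0.0, scale=0.3, size=(50, 3))  # "walking" windows (assumed)
jog = rng.normal(loc=2.0, scale=0.3, size=(50, 3))   # "jogging" windows (assumed)
X = np.vstack([walk, jog])
y = np.array([0] * 50 + [1] * 50)

acc = (knn_predict(X, y, X, k=3) == y).mean()
```

A design point consistent with kNN's strong showing in such comparisons: it has no training phase at all, so recognition quality rests entirely on the feature representation and distance metric.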

In the second category, deep learning-based methodologies are prevalent. Generally, these methods lean toward image processing, so sensor data must first be converted into an image-like representation to support the extraction of discriminative features [16]. As reported in [17], the convolutional neural network (CNN) is an important category of discriminative deep learning model for HAR. The work in [18] proposed a convolutional recurrent neural network to recognise daily activities; the algorithm gained an improvement of 6% over state-of-the-art works. Recently, as reported in [19], transfer learning and semantic approaches have raised great research interest. Bao [20] and Rokni [21] used transfer learning to automatically construct models for newly added wearable sensors, obtaining an accuracy enhancement of 9.3–10%. However, the recognition accuracy depends heavily on the labelling performance of the source devices, so a reliable recognition method for a single sensor is still required.
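The sensor-to-image conversion mentioned above can be sketched as a simple sliding-window reshaping: each window of a multi-channel inertial stream becomes a small 2-D patch that a CNN can consume. The window length, step, and stream shape below are illustrative assumptions, not the exact scheme of [16]:

```python
import numpy as np

def to_activity_images(signal, window, step):
    """Slice a (T, C) multi-channel sensor stream into overlapping
    (window, C) 2-D patches, each usable as a one-channel CNN input."""
    patches = []
    for start in range(0, len(signal) - window + 1, step):
        patches.append(signal[start:start + window])
    # Add a channel axis so the result is (N, 1, window, C)
    return np.stack(patches)[:, None, :, :]

# Hypothetical 3-axis accelerometer stream of 500 samples
stream = np.random.default_rng(1).normal(size=(500, 3))
imgs = to_activity_images(stream, window=128, step=64)  # shape (6, 1, 128, 3)
```

Each resulting patch places time along one image axis and sensor channels along the other, which is what lets standard 2-D convolutions extract discriminative local patterns from raw inertial data.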

Some other methods can also be applied to dedicated applications with good results. Schneider et al. [22] proposed a method for the automatic extraction and selection of highly relevant features; tested on eight datasets, it achieved an overall accuracy above 90%. Rezaie et al. [23] proposed a feedback-controller framework that adapts the sampling rate for better efficiency and higher accuracy. Dao et al. [24] introduced a man-in-the-loop decision architecture with data sharing among users and gradually obtained high accuracy.
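As an illustration of the sampling-rate adaptation idea in [23], a single feedback step might look like the following. The thresholds and the doubling/halving rule are hypothetical, not Rezaie et al.'s actual controller:

```python
def adapt_rate(rate_hz, window_variance,
               low=0.05, high=0.5, min_rate=10, max_rate=100):
    """One feedback step (hypothetical rule): double the sampling rate when
    recent signal variance suggests dynamic motion, halve it when the signal
    is near-static, otherwise keep it unchanged."""
    if window_variance > high:
        return min(rate_hz * 2, max_rate)   # dynamic motion: sample faster
    if window_variance < low:
        return max(rate_hz // 2, min_rate)  # near-static: save energy
    return rate_hz
```

A controller loop would call such a rule after each analysis window, trading energy (low rate while idle) against recognition accuracy (high rate during activity).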

In fact, people perform lower limb locomotion activities every day, such as moving from one place to another and doing sports like running and cycling. Many methods have been proposed for HAR, yet to the best of our knowledge, very few are specially designed for lower limb locomotion activities, including but not limited to walking and jogging [25].
