**1. Introduction**

Prosthetic legs have significantly improved the quality of life of individuals with a transfemoral amputation. Prostheses help lower-limb amputees regain their walking mobility for activities such as level walking, stair ascent and descent, incline walking, and sitting and standing. One active research area is the development of a functional control system for each walking task [1–3]. The main design objective is to enable amputees to walk in a manner similar to able-bodied persons, while minimizing metabolic energy expenditure. Challenges include automatically recognizing gait modes, selecting the appropriate control system corresponding to the identified gait mode, and achieving smooth transitions in real time. Activity mode recognition must therefore be developed in parallel with the control systems themselves. Activity mode recognition is referred to as high-level control, while control system design for each walking activity is referred to as low-level control [4]. The focus of this paper is the development of a high-level control system.

In the design of an intent recognition system, several questions arise, including which input signals and machine learning algorithms will provide a user intent recognition (UIR) system with fast and reliable prediction performance. Previous research has addressed these questions in different ways. For instance, surface electromyography (sEMG) signals have been used to train UIR systems [5,6]. Although sEMG resulted in high classification accuracy, Ref. [7] reported uncertain performance due to sEMG signal variability in real-world conditions. Such variation can arise from electrode shift [8], skin temperature changes [9], or muscle volume changes [10]. Therefore, external sensors on the prosthesis have received significant attention. For instance, classifiers have been trained with data collected from mechanical sensors [11], optical distance sensors [12], and inertial measurement units [7]. In addition, Refs. [13,14] showed that the fusion of sensory measurements could enhance learning, although the amputee subject could be inconvenienced by wearing additional sensors. Various supervised machine learning algorithms have been implemented to build UIR systems, including linear discriminant analysis (LDA) [15], quadratic discriminant analysis (QDA) [16], Gaussian mixture models (GMMs) [11], support vector machines (SVMs) [14], and artificial neural networks (ANNs) [5]. To avoid the need for user-specific classifier training, Ref. [17] proposed a user-independent UIR system in which classifier performance is robust to user-specific characteristics.

Current UIR systems have been designed with one goal in mind: the highest possible prediction accuracy. In clinical applications, it is extremely important that UIR can accurately predict activity modes with substantially different characteristics because misclassification can cause a loss of balance [7,18]. However, there remains a gap in the design of low-complexity UIR. A UIR system has low complexity if it can be implemented with only the significant features extracted from minimal sensing hardware. Low-complexity UIR is important because such systems: (1) eliminate unneeded body-worn sensors that may be irritating and cumbersome; (2) avoid numerical instability and overfitting during training; (3) are robust to noisy measurement signals and sensor failures; and (4) decrease computational effort, which is important for real-time operation. These considerations have motivated previous research to apply feature selection to the design of low-complexity UIR [7,19]; those studies used sequential forward selection to obtain a subset containing only the most informative features. In contrast, in this paper we develop a new framework for UIR that *simultaneously* achieves maximum accuracy and minimum complexity. Complexity and accuracy are two conflicting objectives. To the best of our knowledge, this paper is the first attempt to find a compromise solution to this problem using multi-objective optimization (MOO).
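Sequential forward selection greedily grows a feature subset one feature at a time. The following is a minimal sketch of the idea, not the exact implementation used in [7,19]; the `score` callable (e.g., cross-validated classifier accuracy on a candidate subset) is an assumed interface.

```python
def sequential_forward_selection(score, n_features, k):
    """Greedy forward selection: at each step, add the single feature
    that most improves the scoring function, until k features are chosen.

    score: callable mapping a list of feature indices to a scalar
           (higher is better), e.g., cross-validated accuracy.
    """
    selected = []
    remaining = list(range(n_features))
    while len(selected) < k and remaining:
        best_f, best_s = None, float("-inf")
        for f in remaining:
            s = score(selected + [f])  # evaluate candidate subset
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

The greedy strategy is cheap (O(k·n) subset evaluations) but can miss feature combinations that are only informative jointly, which is one motivation for the multi-objective approach developed in this paper.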

The main contributions of this paper are two-fold: (1) a new MOO method called GMOFS for optimal feature subset selection; and (2) the application of four evolutionary multi-objective biogeography-based optimization (MOBBO) methods to the UIR problem: vector evaluated BBO (VEBBO), non-dominated sorting BBO (NSBBO), niched Pareto BBO (NPBBO), and strength Pareto BBO (SPBBO). We have chosen biogeography-based optimization (BBO) as the evolutionary algorithm (EA) because of its demonstrated effectiveness and recent popularity for optimizing real-world problems [20,21]. MOBBO methods have the potential to find the global optimum [22,23]; however, they are computationally expensive due to the many required fitness function evaluations. To avoid this drawback, we propose GMOFS for feature selection.
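The Pareto-dominance test that underlies dominance-based methods such as non-dominated sorting can be sketched as follows. Here each candidate feature subset is represented by an objective tuple with both objectives minimized (subset size, classification error); this is a generic sketch, not the specific MOBBO implementations evaluated in the paper.

```python
def dominates(a, b):
    """True if a Pareto-dominates b: no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Indices of the points not dominated by any other point,
    i.e., the first non-dominated front of the population."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

Repeatedly removing the current front and recomputing it on the remainder yields the ranked fronts used for selection pressure in non-dominated sorting algorithms.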

Several different types of feature selection methods have been proposed. *Filter* methods assess the quality of a feature subset independently of any classifier, either on its own or with respect to the output class [24]. *Wrapper* methods assess the quality of a feature subset by measuring the prediction accuracy of a classifier trained with that subset [25]. *Embedded* methods perform feature selection as part of classifier training and thereby overcome the disadvantages of the other two approaches: unlike filter methods, they account for the bias of the classifier, and, unlike wrapper methods, they are computationally efficient [26,27]. Various embedded feature selection algorithms have been proposed, mostly for linear problems with a single objective [27,28]. Embedded methods often incorporate regularization algorithms, such as ridge regression [29], the least absolute shrinkage and selection operator (LASSO) [30], and elastic nets [31].
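The elastic net combines the L1 penalty of the LASSO (which drives weights exactly to zero, deselecting features) with the L2 penalty of ridge regression (which stabilizes correlated features). A minimal sketch of the penalty and of the soft-thresholding step that produces exact zeros; the parameter names `lam` and `alpha` are illustrative, not those of any particular reference.

```python
def elastic_net_penalty(w, lam, alpha):
    """Elastic net regularizer: lam * (alpha*||w||_1 + (1-alpha)/2*||w||_2^2).
    alpha = 1 recovers the LASSO; alpha = 0 recovers ridge regression."""
    l1 = sum(abs(wi) for wi in w)
    l2 = sum(wi * wi for wi in w)
    return lam * (alpha * l1 + (1.0 - alpha) / 2.0 * l2)

def soft_threshold(w, t):
    """Proximal step for the L1 term: weights with |w_i| <= t collapse to
    exactly zero, which is what discards (deselects) features."""
    return [max(abs(wi) - t, 0.0) * (1.0 if wi > 0 else -1.0) for wi in w]
```

The exact zeros produced by soft-thresholding are the mechanism by which an embedded method shrinks the feature set during training rather than in a separate search.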

GMOFS is our newly proposed embedded method that simultaneously performs feature selection and classification, and that accounts for multiple objectives in nonlinear systems such as UIR. GMOFS incorporates an elastic net in multilayer perceptron (MLP) neural network training. The elastic net uses a Lagrange multiplier with a complexity parameter to reduce the feature set to an optimal subset, and the MLP classifier is trained with that subset. We investigate the influence of the complexity parameter on the solution of the constrained MLP optimization problem. We then use the optimization solutions to obtain a GMOFS Pareto front: a set of non-dominated solutions that are all equally optimal in the absence of a subjective preference among the objectives.
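To make the idea concrete, the following is a minimal sketch of an elastic-net-penalized MLP training objective, under the assumption that the penalty acts on the input-layer weights `W1` so that rows driven to zero correspond to deselected input features. The function name, weight shapes, and the role of `lam` as the complexity parameter are illustrative assumptions, not the paper's exact GMOFS formulation.

```python
import numpy as np

def penalized_mlp_loss(X, y, W1, b1, W2, b2, lam, alpha=0.5):
    """Cross-entropy loss of a one-hidden-layer MLP plus an elastic-net
    term on the input-layer weights W1. Larger lam (the complexity
    parameter) trades classification accuracy for a sparser feature set."""
    H = np.tanh(X @ W1 + b1)                # hidden-layer activations
    Z = H @ W2 + b2                         # class logits
    Z = Z - Z.max(axis=1, keepdims=True)    # numerically stable softmax
    P = np.exp(Z)
    P /= P.sum(axis=1, keepdims=True)
    ce = -np.mean(np.log(P[np.arange(len(y)), y] + 1e-12))
    penalty = lam * (alpha * np.abs(W1).sum()
                     + (1.0 - alpha) / 2.0 * (W1 ** 2).sum())
    return ce + penalty
```

Sweeping `lam` over a range of values and recording, for each solution, the number of nonzero input rows and the resulting classification error is what yields the set of candidate points from which a Pareto front can be extracted.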

Section 2 presents a general framework for UIR. In Section 2.1, an informative set of signals reflecting various walking tasks is collected experimentally from three able-bodied and three amputee subjects; the data are then filtered and processed to eliminate noise and missing data points. In Section 2.2, we use both disjoint windowing and overlapped windowing to extract data frames. The data frame length and the moving-window increment are chosen to balance the informativeness of the data against computational effort, while taking real-time computational constraints into account. In Section 2.3, various time-domain (TD) and frequency-domain (FD) features are extracted from each data frame for each measurement signal, and a training data set is obtained in which all features are normalized to zero mean and unit variance. In Section 2.4, we use a pre-selection approach to exclude insignificant features, and then apply MOO for final feature selection; we implement GMOFS and four variants of MOBBO to minimize the size of the selected feature subset and maximize the prediction accuracy. In Section 2.5, the performance of several classifiers, including LDA, QDA, SVMs with both linear and radial basis function (RBF) kernels, MLPs, and decision trees (DTs), is compared, and the best one is selected for UIR. In Section 2.6, a majority voting filter (MVF) is implemented to prevent sudden jumps between identified classes and enhance UIR performance. Section 4 discusses the experimental setup and classification results for the optimally designed UIR system. Finally, Section 5 discusses conclusions and future work.
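The windowing and feature extraction steps outlined above can be sketched as follows; the window length, increment, and the particular TD features shown (mean absolute value, waveform length, zero crossings, variance) are common illustrative choices, not necessarily those selected in Section 2.

```python
import numpy as np

def windows(signal, length, step):
    """Yield successive frames of the signal: overlapped windowing when
    step < length, disjoint windowing when step == length."""
    for start in range(0, len(signal) - length + 1, step):
        yield signal[start:start + length]

def td_features(frame):
    """A few common time-domain features computed from one frame."""
    frame = np.asarray(frame, dtype=float)
    mav = np.mean(np.abs(frame))                  # mean absolute value
    wl = np.sum(np.abs(np.diff(frame)))           # waveform length
    zc = int(np.sum(np.signbit(frame[:-1]) != np.signbit(frame[1:])))  # zero crossings
    return [mav, wl, zc, np.var(frame)]
```

Stacking `td_features(f)` for every frame `f` of every measurement channel, then normalizing each feature column, produces the training matrix described in Section 2.3.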
