Article

Towards Prosthesis Control: Identification of Locomotion Activities through EEG-Based Measurements

1 Department of Economics, Engineering, Society and Business Organization, University of Tuscia, 01100 Viterbo, Italy
2 Department of Mechanical, Mechatronics, and Manufacturing Engineering, University of Engineering and Technology, Lahore 38000, Pakistan
3 Human-Centered Robotics Lab, National Centre of Robotics and Automation, Lahore 45200, Pakistan
* Authors to whom correspondence should be addressed.
Robotics 2024, 13(9), 133; https://doi.org/10.3390/robotics13090133
Submission received: 18 July 2024 / Revised: 29 August 2024 / Accepted: 30 August 2024 / Published: 1 September 2024
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)

Abstract

The integration of advanced control systems in prostheses necessitates the accurate identification of human locomotion activities, a task that can significantly benefit from EEG-based measurements combined with machine learning techniques. The main contribution of this study is the development of a novel framework for the recognition and classification of locomotion activities using electroencephalography (EEG) data by comparing the performance of different machine learning algorithms. Data on the lower limb movements during level ground walking, as well as going up stairs, down stairs, up ramps, and down ramps, were collected from 10 healthy volunteers. Time- and frequency-domain features were extracted by applying independent component analysis (ICA). Successively, they were used to train and test random forest and k-nearest neighbors (kNN) algorithms. For the classification task, random forest proved to be the best-performing algorithm, achieving an overall accuracy of up to 92%. The findings of this study contribute to the field of assistive robotics by confirming that EEG-based measurements, when combined with appropriate machine learning models, can serve as robust inputs for prosthesis control systems.

1. Introduction

The loss of limbs, often resulting from trauma, illness, or congenital conditions, significantly impacts an individual’s mobility and quality of life [1,2]. Advances in prosthetic technology have significantly improved the functional capabilities and comfort of amputees, facilitating greater independence and participation in daily activities [3].
Among these innovations, the utilization of electroencephalography (EEG) signals to control prosthetic devices represents the leading edge of current research [4,5]. EEG, a non-invasive technique that records electrical activity along the scalp, provides a direct window into the brain’s activity, offering insights into a user’s intentions [6]. Harnessing these signals, research has been dedicated to designing innovative systems able to measure neural activities and convert them into movements of prosthetic devices, enabling individuals with limb impairments to interact with the world in unprecedented ways [7]. EEG-based approaches can overcome the limitations of traditional methods based on residual muscle activity gathered from surface electromyography or external switches, which are often characterized by poor movement accuracy and a restricted range of motion [8,9]. Several studies have shown a significant incidence of issues that amputees face when using conventional control methods, with up to 70% reporting difficulties related to comfort and stability [10]. Instead, EEG-based prosthetic control systems are directly fed with signals associated with motor intentions. Through signal processing algorithms and machine learning techniques, these systems decode neural patterns, discerning commands for various movements such as walking, running, or even fine tasks associated with the upper limbs [11]. This not only enhances the naturalness and fluidity of prosthetic movements but also empowers users with greater autonomy and functionality in their daily lives. Moreover, the versatility of EEG-based prosthetic control extends beyond mere movement execution.
The identification and classification of lower-limb activities from EEG signals has thus gained significant traction during the last two decades [12,13], and several studies have been published to assess the feasibility of classifying gait-related parameters through the analysis of EEG signals. Among others, Liu et al. [14] explored the use of a BCI to decode lower-limb movement intention from EEG signals, aiming to promote motor recovery and brain plasticity. The study focused on continuous classification and asynchronous detection of movement-related cortical potentials during self-initiated ankle plantar flexion tasks. In comparison to current online detection techniques, the proposed framework showed a greater true-positive rate, fewer false positives, and comparable latencies [14]. Chai et al. [15] attempted to identify gait-related movements in subjects walking with and without an exoskeleton by analyzing the EEG signals. By using features related to the mu- and beta-frequency bands, an average classification accuracy of 74% was obtained on the testing sets [15]. A comparison among different neural network algorithms fed with EEG-based features was proposed in [16], revealing the support vector machine model as the best one, achieving an accuracy greater than 98% in the recognition of gait phases. A similar approach was proposed by Bodda et al. for the identification of the stance and swing phases through the analysis of EEG activity [17]. The intention to initiate walking has been discriminated by using a support vector machine with an RBF kernel, achieving an overall accuracy of 73% when the algorithm was fed with features extracted from EEG signals gathered from six healthy subjects and two amputees [18]. To automatically identify unstable gait patterns, Soangra and colleagues [19] applied both machine learning and deep learning algorithms to implement a BCI able to prevent falls, finding the recurrent neural network to be the best solution, with an accuracy greater than 80% [19]. A support vector machine classifier combined with a directed acyclic graph was validated to move a prosthetic leg according to specific gait trajectories [20]. By testing the method on three healthy subjects, it was observed that all of them were able to walk smoothly on floors and stairs. It is clear that all the previously reported articles focus on specific parameters of the gait, such as gait phases or stability, rather than on the discrimination of different walking activities. In addition, no studies have been published on the performance of EEG-based algorithms when walking on irregular terrains; in fact, all the papers evaluated the performance only in controlled environments.
Despite the plethora of studies and the advantages of using EEG for developing prosthesis control systems, challenges persist in guaranteeing the widespread adoption of EEG-controlled prosthetics, ranging from signal variability to the need to test the methodology in uncontrolled environments over irregular terrains. To the best of the authors’ knowledge, a comparison among different machine learning algorithms to identify the best method for recognizing different locomotion tasks on different outdoor walking terrains is still lacking, since previous studies have focused on level walking in a laboratory. Therefore, the contribution of this study is to fill this gap by comparing the performance of different machine learning algorithms, particularly random forest and k-nearest neighbors, in classifying locomotion activities based on EEG data. The findings of this study are expected to advance the field by providing insights into the most effective algorithms for implementing EEG-based prosthetic control systems.

2. Materials and Methods

2.1. Participants

Ten healthy male young adults aged from 18 to 22 years were enrolled in the study. Subjects with physical and/or neurological disorders were excluded. The whole content of the experiment was explained, and written informed consent was obtained from all the subjects. The protocol complied with the ethical standards outlined in the Declaration of Helsinki and was approved by the Bioethics Committee of the University of Engineering and Technology, Lahore, Pakistan (approval no. 4138, 29 October 2019).

2.2. Experimental Setup and Procedure

To measure the EEG signals, the commercially available 14-channel Emotiv EPOC headset was used [21]. The device tracks the electrical impulses produced by the brain using 14 electrodes placed at the AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 positions, according to the international 10–20 system [22]. In this system, odd-numbered electrodes acquire signals from the left hemisphere, whereas even-numbered electrodes are used for the right hemisphere. Two additional electrodes, CMS and DRL on the left and right side, respectively, were used to reduce the noise. Table 1 indicates the area covered by each electrode and the specific information gathered from it.
Figure 1 reports the sensor system and the location of the electrodes on the scalp.
The device provides a 14-bit resolution for each EEG channel, ensuring precise data capture. The sampling frequency of the Emotiv EPOC can be configured to either 128 Hz or 256 Hz, with the latter used in this study. This frequency is sufficient for capturing the relevant brainwave activity, particularly in the alpha- and beta-frequency bands, which are commonly analyzed in EEG studies. For data transmission, the Emotiv EPOC uses a wireless Bluetooth connection. The data are transmitted in real time to a USB dongle connected to a computer, allowing for live monitoring and analysis of the EEG signals. The device is powered by an internal rechargeable battery, which provides approximately 12 h of continuous operation. The headset works with a software package featuring a user-friendly interface that shows both the contact quality of the electrodes with the scalp and the quality of the acquired EEG signals. The Emotiv EPOC headset has already been validated for brain–computer interface applications [23].
A skilled operator placed the Emotiv EPOC headset on the scalp of each participant, adjusting it to ensure the correct placement of each electrode over the corresponding area. It is worth noting that the proper positioning of the headset over the scalp determines the quality of the acquired signals. The hair and the skin of the user attenuate the detected electrical activity; thus, a conductive gel was used to reduce the impedance mismatch. Before starting the experimental procedure, the EEG quality was tested using the dedicated software, ensuring a quality score of 100%, which corresponds to full contact between the headset and the scalp. During this preliminary test, participants were instructed to remain relaxed and avoid excessive movement during data collection.
After that, participants were involved in the actual experimental procedure, during which raw EEG data were acquired. Specifically, each participant was asked to perform a variety of locomotion activities: (i) walking on level ground (W); (ii) ascending stairs (AS); (iii) descending stairs (DS); (iv) ascending a ramp (AR); and (v) descending a ramp (DR). Each task was repeated three times. The participants wore their normal daily shoes and performed the movements at their self-selected speed. Rest periods were provided between tasks to minimize fatigue and to check the EEG data quality. To avoid bias in the results due to the protocol sequence, the order of tasks was randomized across participants.

2.3. Data Processing and Analysis

Figure 2 depicts the flowchart starting from the data acquisition to the machine learning application. As observed, the flowchart also includes: (i) pre-processing and (ii) feature extraction and artifact removal. All the analyses were conducted in Matlab (R2022a, The MathWorks Inc., Natick, MA, USA) using the EEGLAB toolbox (version 14.1.0) [24].

2.3.1. Pre-Processing

The first step of the data analysis involved band-pass filtering to retain only the frequency bands of interest. Although different types of filters were initially considered, the Butterworth band-pass filter was consistently selected for all subsequent analyses due to its superior performance, as illustrated in Figure 3. Specifically, a 6th-order Butterworth band-pass filter, which is known for its smooth frequency response and lack of ripples, was applied to isolate and analyze alpha (8–13 Hz) and beta (13–30 Hz) waves. According to the literature, this filtering enhances the signal-to-noise ratio, permitting a more reliable and robust detection of the brain activity patterns relevant to locomotion, a fundamental aspect for the application of machine learning algorithms [12,19].
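As a reference for reproducing this step, a minimal MATLAB sketch of the filtering is reported below. It assumes the raw recording is stored in a 14 × T matrix named eegRaw and that fs = 256 Hz; the variable names are illustrative and do not come from the original processing scripts.

```matlab
% Minimal sketch: 6th-order Butterworth band-pass filtering of the raw EEG,
% assuming eegRaw is a 14 x T matrix and fs = 256 Hz (illustrative names).
fs = 256;                                          % sampling frequency [Hz]
[b, a] = butter(3, [8 30]/(fs/2), 'bandpass');     % order 3 per edge -> 6th-order band-pass
eegFilt = filtfilt(b, a, eegRaw')';                % zero-phase filtering along time

% Alpha (8-13 Hz) and beta (13-30 Hz) bands can also be isolated separately
[bA, aA] = butter(3, [8 13]/(fs/2), 'bandpass');
[bB, aB] = butter(3, [13 30]/(fs/2), 'bandpass');
alphaBand = filtfilt(bA, aA, eegRaw')';
betaBand  = filtfilt(bB, bB*0 + aB, eegRaw')';
```

Note that filtfilt applies the filter forwards and backwards, avoiding the phase distortion that a single-pass filter would introduce.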

2.3.2. Feature Extraction and Artifact Removal

After filtering, the data were analyzed to extract the features for the application of the machine learning algorithms. Feature extraction was carried out by applying independent component analysis (ICA). ICA is a data-driven method that is effective for blind source separation and signal decomposition: the observed data are assumed to be a linear combination of unknown source signals, and the method separates these sources by exploiting their statistical independence [25]. In the context of EEG, ICA is frequently used to separate the recorded data into independent components, each of which represents a different brain source or activity [26]. Specifically, the acquired EEG signal for each subject and each locomotion task is organized into a matrix whose rows represent the different channels and whose columns represent the time points. Given a 14-channel EEG signal X of size 14 × T, where T is the number of time points, we applied the information maximization (infomax) algorithm [26,27] to decompose it into 14 independent components forming a matrix S of size 14 × T. The decomposition followed Equation (1):
W · X = S
where X is the matrix acquired through the EEG system, W represents the unmixing matrix, and S contains the sources, also called independent components (ICs). Thus, the 14-channel signals were decomposed into 14 ICs, ordered by decreasing variance [28].
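For completeness, a minimal sketch of this decomposition using EEGLAB’s runica function is reported below; it assumes the EEGLAB toolbox is on the MATLAB path and that eegFilt is the 14 × T band-pass-filtered matrix obtained in the previous step (variable names are illustrative).

```matlab
% Minimal sketch of the infomax decomposition (Equation (1)) using EEGLAB's runica.
[weights, sphere] = runica(eegFilt);   % infomax ICA (EEGLAB toolbox)
W = weights * sphere;                  % unmixing matrix
S = W * eegFilt;                       % 14 independent components, one per row

% Order the ICs by decreasing variance, as described above
[~, order] = sort(var(S, 0, 2), 'descend');
S = S(order, :);
W = W(order, :);
```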
An example of the decomposition is reported in Figure 4.
After the decomposition, the EEGLAB toolbox allowed for the automatic labelling of the independent components. Specifically, eye blinks, eye movements, muscle activity, electrocardiographic artifacts, and genuine brain-related components can be distinguished by using a statistical approach [26].
A further artifact removal step was applied using the ADJUST algorithm. While ICA is used for breaking down EEG signals into independent components, ADJUST focuses on automatic artifact detection and removal. Thus, infomax ICA is typically used as an initial step to decompose the EEG signals, whereas ADJUST is applied to further clean the data. The ADJUST algorithm, proposed in [29], computes a set of spatial and temporal features for each IC extracted by infomax ICA. In particular, the algorithm uses the following six features (an illustrative computation of two of them is sketched after the list):
  • Spatial Average Difference (SAD)—A measure of the spatial distribution of ICs in terms of the difference between central and peripheral electrodes. It measures the average difference in voltage between the front and back parts of the scalp.
  • Variance Time Course (VTC)—A measure of the temporal variance in ICs. It measures rapid fluctuation in the EEG signals.
  • Range of Time Course (RTC)—A measure of the range of the IC time course. It measures the difference between the maximum and minimum values of the time course of an independent component.
  • Spatial Variance of the Average (SVA)—A measure of the spatial variance of the average activity of ICs. It calculates the variance in the scalp map averaged across all electrodes, helping to identify components where the distribution of activity deviates significantly from the typical spatial patterns observed in neural signals.
  • Kurtosis of Time Course (KTC)—A measure of the tailedness of the IC time course distribution, assessing the temporal characteristics of the EEG signal. A higher kurtosis value indicates a more peaked and potentially spiky distribution.
  • Spatial Entropy (SE)—A measure of the spatial distribution entropy of ICs. It quantifies how uniformly the signal energy is distributed over the scalp.
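As an example of how such descriptors can be obtained, the following sketch computes two of the temporal features, the kurtosis (KTC) and the range (RTC) of each IC time course, from the matrix S of independent components introduced above. This is a simplified illustration under our own naming assumptions, not the exact ADJUST implementation, whose feature definitions are detailed in [29].

```matlab
% Illustrative sketch (not the exact ADJUST implementation): two temporal
% descriptors computed per independent component from the 14 x T matrix S.
KTC = kurtosis(S, [], 2);                 % kurtosis of each IC time course (14 x 1)
RTC = max(S, [], 2) - min(S, [], 2);      % range of each IC time course (14 x 1)
```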
Table 2 reports the selected threshold for each feature.
The predefined thresholds were then applied to the extracted features to classify each IC as either artifact or non-artifact. After that, decision rules based on combinations of features were used for the final classification, according to Table 3, where T indicates the threshold values previously reported.
Following the rules, the ICs classified as artifacts were removed, and the cleaned EEG data were reconstructed by back-projecting the remaining ICs.
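To make the decision logic concrete, the sketch below encodes the rules of Table 3 and back-projects the surviving ICs to the channel space. The per-IC feature vectors (SAD, VTC, RTC, KTC, SE) and the threshold struct T are assumed to be available under the illustrative names used here; this is a sketch of the procedure described above, not the original ADJUST code.

```matlab
% Sketch of the Table 3 rules, assuming SAD, VTC, RTC, KTC, SE are 14 x 1 vectors
% of per-IC features and T a struct holding the thresholds of Table 2.
isBlink   = SAD > T.SAD & KTC > T.KTC & SE < T.SE;   % eye blink
isVertEye = VTC > T.VTC;                             % vertical eye movements
isHorEye  = RTC > T.RTC;                             % horizontal eye movements
isDiscont = VTC > T.VTC & RTC > T.RTC;               % general discontinuities
artifactIdx = find(isBlink | isVertEye | isHorEye | isDiscont);

% Back-projection: zero out the artifact ICs and reconstruct the cleaned EEG
Sclean = S;
Sclean(artifactIdx, :) = 0;
eegClean = pinv(W) * Sclean;    % cleaned 14 x T EEG in channel space
```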

2.3.3. Activity Recognition through Machine Learning

To automatically identify the locomotion activities from the EEG data, a comparison between two supervised machine learning algorithms was performed. In particular, random forest (RF) and k-nearest neighbor (kNN) were compared.
RF is a commonly used supervised machine learning algorithm based on an ensemble of binary decision trees [30,31]. The algorithm works by combining the outputs of multiple uncorrelated decision trees to produce a single classification: each individual tree votes for a class, and the class with the most votes is taken as the final prediction. Thanks to the bootstrap aggregating method, RF improves on the performance of a single decision tree [32]. When designing the model, one main hyperparameter must be selected, namely the number of trees, also known as the number e of estimators. In our application, we selected e equal to 100 after conducting preliminary tests with various values (e.g., 50, 100, 150, and 200). The selection of 100 estimators was based on achieving a balance between classification accuracy and computational efficiency. Our tests revealed that increasing the number of estimators beyond 100 provided only marginal improvements in accuracy, with diminishing returns in performance. For instance, while using 150 or 200 estimators slightly improved the accuracy, the gains were not significant enough to justify the additional computational cost, particularly in real-time applications such as prosthesis control, where processing speed is crucial.
kNN is one of the most widespread supervised machine learning algorithms for physical activity recognition, belonging to the geometric classifiers [33]. The algorithm treats each combination of features as a point in an n-dimensional space, where n indicates the number of features, and assigns a new point to the most common class among its k nearest neighbors. When designing the model, two main hyperparameters must be selected, namely the distance metric and the number k of neighbors considered for the computation. In our application, we selected k equal to 5 and used the Euclidean distance to compute the distance between neighbors.
The selected algorithms were compared by using two different datasets: the former composed of the independent components extracted by the infomax ICA and the latter composed of the independent components retained after applying the ADJUST algorithm. Hereinafter, the first dataset is referred to as ICs and the second one as adj_ICs. The performance of the algorithms was determined by applying a cross-validation scheme, dividing each dataset into 80% for training and 20% for testing. To apply the supervised methodology, each datapoint was manually labelled with the related class, as reported in Table 4.
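To summarize the classification setup, the sketch below trains both models on an 80/20 split using MATLAB’s Statistics and Machine Learning Toolbox. The feature matrix features (one row per labelled datapoint) and the label vector labels (classes 0–4 as in Table 4) are illustrative names; the exact feature construction follows the ICA pipeline described above.

```matlab
% Sketch of the 80/20 evaluation of RF (100 trees) and kNN (k = 5, Euclidean distance),
% assuming features (N x F) and labels (N x 1, classes 0-4) for one dataset (ICs or adj_ICs).
rng(1);                                               % reproducibility
cv  = cvpartition(labels, 'HoldOut', 0.20);           % stratified 80/20 split
Xtr = features(training(cv), :);   Ytr = labels(training(cv));
Xte = features(test(cv), :);       Yte = labels(test(cv));

rfModel  = TreeBagger(100, Xtr, Ytr, 'Method', 'classification');        % random forest, e = 100
knnModel = fitcknn(Xtr, Ytr, 'NumNeighbors', 5, 'Distance', 'euclidean');

predRF  = str2double(predict(rfModel, Xte));          % TreeBagger returns labels as strings
predKNN = predict(knnModel, Xte);
```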
The performance of each classifier was evaluated on each tested dataset by using common metrics, which involve the computation of the confusion matrix. A 5 × 5 confusion matrix, where 5 is the number of classes to be classified, was obtained for each combination of algorithm and dataset. Starting from the confusion matrix, the following synthetic indices were computed:
  • Accuracy A—the ratio between the correctly predicted locomotion activities and the total number of predictions. Accuracy is an overall index that takes all five classes into account together. The computation of the accuracy follows Equation (2):
A = (TP + TN) / (TP + FP + TN + FN)
where TP, TN, FP, and FN are the true positive, true negative, false positive, and false negative, respectively.
  • Recall R—the ratio between TP and the sum of TP and FN, computed for each class, according to Equation (3):
R = TP / (TP + FN)
  • Precision P—the ratio between TP and the sum of TP and FP, computed for each class, according to Equation (4):
P = TP / (TP + FP)
  • F1-score—the harmonic mean of R and P, thus also computed for each class, according to Equation (5):
F1-score = 2 · R · P / (R + P)
A classifier is considered optimum when all the above-mentioned indices are greater than 0.80 [33]. In addition, it is worth noting that the reported indices account for the robustness of the algorithm to both type I and type II errors. Specifically, accuracy and recall take into account type II errors, i.e., avoiding false negatives, whereas precision takes into account type I errors, i.e., avoiding false positives [34].
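A compact way to obtain all the indices of Equations (2)–(5) from the predictions of the previous step is sketched below, assuming the true labels Yte and the predicted labels predRF (or predKNN) defined above; the per-class recall, precision, and F1-score are derived directly from the 5 × 5 confusion matrix.

```matlab
% Sketch: overall accuracy and per-class recall, precision, and F1-score
% from the 5 x 5 confusion matrix (rows = true classes, columns = predicted classes).
C = confusionmat(Yte, predRF);                 % 5 x 5 confusion matrix
A = sum(diag(C)) / sum(C(:));                  % overall accuracy, Equation (2)

R  = diag(C) ./ sum(C, 2);                     % recall per class, Equation (3)
P  = diag(C) ./ sum(C, 1)';                    % precision per class, Equation (4)
F1 = 2 .* R .* P ./ (R + P);                   % F1-score per class, Equation (5)
```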

3. Results and Discussion

An example of the graphs obtained by the application of infomax ICA and ADJUST is reported in Figure 5, where the statistical approach reveals the probability that each IC reflects brain-related activity.
For the sake of completeness, Table 5 reports the number of brain-related independent components for each activity extracted by the two tested ICA algorithms.
Moving to the machine learning algorithms, the results of the performance analysis are depicted in Table 6.
In the case of the random forest (RF) algorithm, the ICs dataset demonstrates an overall accuracy of 0.73, which is lower than the threshold for the optimum classifier [33]. Within this dataset, R is notably high for W and DR, achieving values of 0.96 and 0.92, respectively. However, R is lower for DS and AS, with values of 0.25 and 0.47, whereas AR exhibits a moderate R of 0.61. The precision values vary, ranging from 0.70 for W to 0.89 for AR, with DS reaching 0.79. Finally, the F1-score is highest for DR at 0.87 and lowest for DS at 0.38. When analyzing the adj_ICs dataset with the RF algorithm, there is a marked improvement in overall accuracy, which rises to 0.93, falling in the optimal range [33]. Recall remains very high across all activities, particularly for W (0.99) and DR (0.98), while DS, despite having the lowest R, still records a substantial 0.80, which is in line with the above-mentioned threshold. P is consistently high across all activities, with values exceeding 0.90. Correspondingly, the F1-score also shows high values, particularly for DR (0.98) and W (0.95). Conversely, the k-nearest neighbors (kNN) algorithm exhibits a lower overall accuracy of 0.64 on the ICs dataset. For this dataset, R is highest for W (0.84) and DR (0.83), but it is quite low for DS (0.24). Precision for kNN varies moderately, with values ranging from 0.53 for AS to 0.70 for DR. The F1-score is highest for DR at 0.76 and lowest for DS at 0.34. The performance of the kNN algorithm improves when applied to the adj_ICs dataset, achieving an overall accuracy of 0.80. R is highest for DR (0.99) and W (0.91) but remains low for DS (0.27). P is, instead, highest for DS at 0.93 and lowest for DR at 0.67. The F1-score ranges from 0.42 for DS to 0.87 for W.
By comparing the results, the random forest algorithm consistently outperforms kNN in terms of overall accuracy for both datasets. By considering all the metrics together, it is clear that only the RF algorithm on the adj_ICs dataset proved to be optimum for all the indices, whereas the kNN algorithm implemented on the adj_ICs dataset does not meet all the performance criteria, although it is characterized by an optimal overall accuracy. In fact, even if a high accuracy indicates that the classifier is generally making correct predictions, a lower value of precision does not allow us to exclude a large number of false positives [34]. Considering the final aim of the classifier, which is to be implemented into a control system for prostheses, it is clear how a classifier with a high rate of false positives can severely affect the user experience by causing safety risks, increased cognitive load, and reduced performance [35]. Moreover, the RF algorithm must be preferred to kNN for other theoretical reasons. One of the primary benefits of RF is its robustness and ability to handle a wide variety of data types and structures [31]. Unlike kNN, which relies on the notion of distance metrics and can struggle with high-dimensional data, RF constructs multiple decision trees and merges their results to improve accuracy and control overfitting. This ensemble approach allows RF to manage complex datasets, such as EEG data, more effectively and produce more reliable predictions. Another significant advantage of RF is its robustness to noise and overfitting, which is fundamental in the case of EEG datasets. This is because each tree in the forest is built from a random subset of the data, which helps to mitigate the impact of anomalies and noise, leading to more generalized and stable predictions [31]. Speed and scalability are other areas where RF outperforms kNN. Once a random forest model is trained, making predictions is relatively fast because it involves threshold-based classification rather than computing distances to every training example, as required by kNN. This efficiency makes random forest more suitable for large-scale applications where rapid predictions are necessary, for example, in the real-time control of prostheses [36]. Concerning the comparison between the two ICA algorithms, it is clear that the use of the ADJUST algorithm for the extraction of the independent components consistently increases the performance. This finding is in line with the literature, where it has been proved that ADJUST provides a more robust and reliable method for artifact removal [29]. In fact, it has already been proven that, by removing artifacts and enhancing the signal-to-noise ratio, the data are better suited for training and testing supervised learning models [37]. Finally, the achieved performance of the best-performing algorithm is comparable with those reported in the literature for the discrimination of other gait-related activities, such as the risk of falls [19], gait phases [15,16], and gait initiation [18], even if those results were obtained with different machine learning algorithms. This finding could suggest the development of a control system based on different machine learning algorithms, each tailored to a specific task.
By focusing on the specific errors made by each classification algorithm, when trained on the ICs dataset, RF struggles notably with detecting the ascending-stairs and descending-stairs movements. This is reflected in its low recall values for these activities, indicating that the algorithm often fails to identify these movements when they occur. This low recall suggests a high rate of false negatives, where actual instances of these movements are missed by the model. The precision of RF across the various movements also highlights its classification errors; for the ascending-stairs and descending-stairs movements, the algorithm shows moderate-to-low precision, meaning that, while it identifies some instances correctly, it also misclassifies a significant number of non-relevant instances as these movements. This is evident from the precision scores, where the ascending-stairs and descending-stairs movements have lower values compared to other activities. The adjustment of the ICs improves both recall and precision across the board, reducing the occurrence of these false negatives and false positives and resulting in a more balanced performance. kNN, instead, exhibits a different error profile. With the ICs dataset, kNN’s recall is particularly low for the ascending-stairs and descending-stairs movements, similar to that of RF, which suggests that the algorithm frequently misses these movements. This indicates that kNN has a high false-negative rate, where true instances of these movements are not detected. Additionally, kNN shows lower precision for the ascending-stairs and descending-stairs movements, implying that, when the algorithm does predict these movements, it often does so incorrectly, leading to a high rate of false positives. With the adjusted ICs dataset, kNN shows improvements in recall for most movements, especially for ascending stairs and descending ramps. This suggests that the adjustments help the algorithm better identify these activities, though challenges remain. Precision also improves, but not uniformly across all movements. Despite these improvements, kNN still exhibits variability in its performance, reflecting a continued presence of classification errors in certain movements. These error patterns highlight the need for further refinement in both algorithms to achieve more reliable and accurate movement detection.
In summary, we can affirm that RF fed with EEG-based features extracted through the ADJUST algorithm for independent component analysis could be a suitable solution for implementing a control system for the real-time movement of prosthetic limbs, even in uncontrolled environments such as daily activities involving locomotion on irregular terrains.

Limitations

Although the results are promising, the study presents some limitations that must be addressed to fully validate and generalize the findings. To build on this work, it is crucial to conduct additional research that encompasses a larger and more diverse sample. Expanding the pool of enrolled participants to include individuals with different demographic characteristics will enhance the representativeness of the results and ensure that the findings are broadly applicable. Furthermore, it is important to test the algorithm with participants who are actual users of prosthetic devices. This practical testing will provide valuable insights into the algorithm’s effectiveness and usability in everyday scenarios, including various distractions and obstacles, and will also allow a possible optimization of the computational time required by the signal processing. Moreover, further machine learning and deep learning algorithms should be compared. By addressing these aspects, future research can offer a more comprehensive understanding and confirm the algorithm’s practical utility across different contexts and populations.
Finally, given the spread of flexible electronic sensors in several applications for physiological and health data collection [38,39,40,41], their applicability to EEG acquisition could be tested.

4. Conclusions

Aiming to understand whether it is possible to use EEG-based machine learning algorithms to automatically distinguish locomotion tasks, we comparatively examined the random forest and k-nearest neighbors methods trained with brain activity features extracted through two different independent component analysis (ICA) algorithms. The findings demonstrated that the RF algorithm, especially when applied to datasets processed using the ADJUST algorithm for ICA, consistently outperformed kNN in terms of overall accuracy, recall, precision, and F1-score. In addition, all the metrics exceeded the optimal threshold, with an overall accuracy of up to 92%, promoting the use of electroencephalography signals combined with machine learning algorithms for controlling prosthetic devices. Future research should focus on testing and refining these algorithms in real-world conditions to ensure the reliability of these systems in uncontrolled environments, as well as on increasing the amount of data available for a standardized training of the model.

Author Contributions

Conceptualization, S.Z., H.F.M., M.I.A., D.J.M., Z.u.A., W.A., and S.R.; Data curation, H.F.M., J.T., and S.R.; Formal analysis, S.Z., H.F.M., M.I.A., D.J.M., Z.u.A., W.A., and S.R.; Methodology, S.Z., H.F.M., M.I.A., D.J.M., Z.u.A., W.A., J.T., and S.R.; Software, J.T.; Supervision, S.Z., H.F.M., J.T., and S.R.; Validation, J.T. and S.R.; Writing—original draft, S.Z., H.F.M., M.I.A., D.J.M., W.A., and J.T.; Writing—review and editing, S.Z., H.F.M., Z.u.A., and S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are available upon request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sarroca, N.; Valero, J.; Deus, J.; Casanova, J.; Luesma, M.J.; Lahoz, M. Quality of Life, Body Image and Self-Esteem in Patients with Unilateral Transtibial Amputations. Sci. Rep. 2021, 11, 12559. [Google Scholar] [CrossRef]
  2. Sinha, R.; van den Heuvel, W.J.; Arokiasamy, P. Factors Affecting Quality of Life in Lower Limb Amputees. Prosthet Orthot. Int. 2011, 35, 90–96. [Google Scholar] [CrossRef] [PubMed]
  3. Morgan, S.J.; Liljenquist, K.S.; Kajlich, A.; Gailey, R.S.; Amtmann, D.; Hafner, B.J. Mobility with a Lower Limb Prosthesis: Experiences of Users with High Levels of Functional Ability. Disabil. Rehabil. 2022, 44, 3236–3244. [Google Scholar] [CrossRef]
  4. Jamil, N.; Belkacem, A.N.; Ouhbi, S.; Lakas, A. Noninvasive Electroencephalography Equipment for Assistive, Adaptive, and Rehabilitative Brain–Computer Interfaces: A Systematic Literature Review. Sensors 2021, 21, 4754. [Google Scholar] [CrossRef]
  5. Ahmed, F.; Iqbal, H.; Nouman, A.; Maqbool, H.F.; Zafar, S.; Saleem, M.K. A Non Invasive Brain-Computer-Interface for Service Robotics. In Proceedings of the 2023 3rd International Conference on Artificial Intelligence (ICAI), Islamabad, Pakistan, 22–23 February 2023; IEEE: Piscataway, NJ, USA; pp. 142–147. [Google Scholar]
  6. Castiblanco Jimenez, I.A.; Gomez Acevedo, J.S.; Marcolin, F.; Vezzetti, E.; Moos, S. Towards an Integrated Framework to Measure User Engagement with Interactive or Physical Products. Int. J. Interact. Des. Manuf. (IJIDeM) 2023, 17, 45–67. [Google Scholar] [CrossRef]
  7. Li, P.; Qian, Y.; Si, N. Electroencephalogram and Electrocardiogram in Human-Computer Interaction. In Proceedings of the 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA), Dalian, China, 28 October 2022; IEEE: Piscataway, NJ, USA; pp. 646–654. [Google Scholar]
  8. Gehlhar, R.; Tucker, M.; Young, A.J.; Ames, A.D. A Review of Current State-of-the-Art Control Methods for Lower-Limb Powered Prostheses. Annu. Rev. Control 2023, 55, 142–164. [Google Scholar] [CrossRef]
  9. Fleming, A.; Stafford, N.; Huang, S.; Hu, X.; Ferris, D.P.; Huang, H. Myoelectric Control of Robotic Lower Limb Prostheses: A Review of Electromyography Interfaces, Control Paradigms, Challenges and Future Directions. J. Neural Eng. 2021, 18, 041004. [Google Scholar] [CrossRef]
  10. Gomez-Vargas, D.; Ballen-Moreno, F.; Barria, P.; Aguilar, R.; Azorín, J.M.; Munera, M.; Cifuentes, C.A. The Actuation System of the Ankle Exoskeleton T-FLEX: First Use Experimental Validation in People with Stroke. Brain Sci. 2021, 11, 412. [Google Scholar] [CrossRef] [PubMed]
  11. Orban, M.; Elsamanty, M.; Guo, K.; Zhang, S.; Yang, H. A Review of Brain Activity and EEG-Based Brain–Computer Interfaces for Rehabilitation Application. Bioengineering 2022, 9, 768. [Google Scholar] [CrossRef]
  12. Tariq, M.; Trivailo, P.M.; Simic, M. EEG-Based BCI Control Schemes for Lower-Limb Assistive-Robots. Front. Hum. Neurosci. 2018, 12, 312. [Google Scholar] [CrossRef]
  13. Kline, A.; Forkert, N.D.; Felfeliyan, B.; Pittman, D.; Goodyear, B.; Ronsky, J. FMRI-Informed EEG for Brain Mapping of Imagined Lower Limb Movement: Feasibility of a Brain Computer Interface. J. Neurosci. Methods 2021, 363, 109339. [Google Scholar] [CrossRef]
  14. Liu, D.; Chen, W.; Lee, K.; Chavarriaga, R.; Iwane, F.; Bouri, M.; Pei, Z.; Millan, J. del R. EEG-Based Lower-Limb Movement Onset Decoding: Continuous Classification and Asynchronous Detection. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1626–1635. [Google Scholar] [CrossRef]
  15. Chai, J.; Chen, G.; Thangavel, P.; Dimitrakopoulos, G.N.; Kakkos, I.; Sun, Y.; Dai, Z.; Yu, H.; Thakor, N.; Bezerianos, A.; et al. Identification of Gait-Related Brain Activity Using Electroencephalographic Signals. In Proceedings of the 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), Shanghai, China, 25–28 May 2017; IEEE: Piscataway, NJ, USA; pp. 548–551. [Google Scholar]
  16. Wei, P.; Zhang, J.; Tian, F.; Hong, J. A Comparison of Neural Networks Algorithms for EEG and SEMG Features Based Gait Phases Recognition. Biomed. Signal Process. Control 2021, 68, 102587. [Google Scholar] [CrossRef]
  17. Bodda, S.; Maya, S.; Em Potti, M.N.; Sohan, U.; Bhuvaneshwari, Y.; Mathiyoth, R.; Diwakar, S. Computational Analysis of EEG Activity during Stance and Swing Gait Phases. Procedia Comput. Sci. 2020, 171, 1591–1597. [Google Scholar] [CrossRef]
  18. Hasan, S.M.S.; Siddiquee, M.R.; Atri, R.; Ramon, R.; Marquez, J.S.; Bai, O. Prediction of Gait Intention from Pre-Movement EEG Signals: A Feasibility Study. J. NeuroEng. Rehabil. 2020, 17, 50. [Google Scholar] [CrossRef] [PubMed]
  19. Soangra, R.; Smith, J.A.; Rajagopal, S.; Yedavalli, S.V.R.; Anirudh, E.R. Classifying Unstable and Stable Walking Patterns Using Electroencephalography Signals and Machine Learning Algorithms. Sensors 2023, 23, 6005. [Google Scholar] [CrossRef] [PubMed]
  20. Gao, H.; Luo, L.; Pi, M.; Li, Z.; Li, Q.; Zhao, K.; Huang, J. EEG-Based Volitional Control of Prosthetic Legs for Walking in Different Terrains. IEEE Trans. Autom. Sci. Eng. 2021, 18, 530–540. [Google Scholar] [CrossRef]
  21. Vokorokos, L.; Madoš, B.; Ádám, N.; Baláž, A. Data Acquisition in Non-Invasive Brain-Computer Interface Using Emotiv Epoc Neuroheadset. Acta Electrotech. Inform. 2012, 12, 5. [Google Scholar] [CrossRef]
  22. Homan, R.W.; Herman, J.; Purdy, P. Cerebral Location of International 10–20 System Electrode Placement. Electroencephalogr. Clin. Neurophysiol. 1987, 66, 376–382. [Google Scholar] [CrossRef]
  23. LaRocco, J.; Le, M.D.; Paeng, D.-G. A Systemic Review of Available Low-Cost EEG Headsets Used for Drowsiness Detection. Front. Neuroinform. 2020, 14, 553352. [Google Scholar] [CrossRef] [PubMed]
  24. Brunner, C.; Delorme, A.; Makeig, S. Eeglab–an Open Source Matlab Toolbox for Electrophysiological Research. Biomed. Eng./Biomed. Tech. 2013, 58, 000010151520134182. [Google Scholar] [CrossRef]
  25. Hyvärinen, A.; Oja, E. Independent Component Analysis: Algorithms and Applications. Neural Netw. 2000, 13, 411–430. [Google Scholar] [CrossRef] [PubMed]
  26. Delorme, A.; Makeig, S. EEGLAB: An Open Source Toolbox for Analysis of Single-Trial EEG Dynamics Including Independent Component Analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed]
  27. Sai, C.Y.; Mokhtar, N.; Arof, H.; Cumming, P.; Iwahashi, M. Automated Classification and Removal of EEG Artifacts with SVM and Wavelet-ICA. IEEE J. Biomed. Health Inform. 2018, 22, 664–670. [Google Scholar] [CrossRef]
  28. Malik, A.N.; Iqbal, J.; Tiwana, M.I. EEG Signals Classification and Determination of Optimal Feature-Classifier Combination for Predicting the Movement Intent of Lower Limb. In Proceedings of the 2016 2nd International Conference on Robotics and Artificial Intelligence (ICRAI), 1–2 November 2016; IEEE: Piscataway, NJ, USA; pp. 45–49. [Google Scholar]
  29. Mognon, A.; Jovicich, J.; Bruzzone, L.; Buiatti, M. ADJUST: An Automatic EEG Artifact Detector Based on the Joint Use of Spatial and Temporal Features. Psychophysiology 2011, 48, 229–240. [Google Scholar] [CrossRef] [PubMed]
  30. Mannini, A.; Sabatini, A.M. Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers. Sensors 2010, 10, 1154–1175. [Google Scholar] [CrossRef] [PubMed]
  31. Ho, T.K. Random Decision Forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995; pp. 278–282. [Google Scholar]
  32. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical Human Activity Recognition Using Wearable Sensors. Sensors 2015, 15, 31314–31338. [Google Scholar] [CrossRef]
  33. Taborri, J.; Palermo, E.; Rossi, S. Automatic Detection of Faults in Race Walking: A Comparative Analysis of Machine-Learning Algorithms Fed with Inertial Sensor Data. Sensors 2019, 19, 1461. [Google Scholar] [CrossRef] [PubMed]
  34. Neyman, J.; Pearson, E.S.; Yule, G.U. The Testing of Statistical Hypotheses in Relation to Probabilities a Priori. Math. Proc. Camb. Philos. Soc. 1933, 29, 492. [Google Scholar] [CrossRef]
  35. Bai, O.; Kelly, G.; Fei, D.-Y.; Murphy, D.; Fox, J.; Burkhardt, B.; Lovegreen, W.; Soars, J. A Wireless, Smart EEG System for Volitional Control of Lower-Limb Prosthesis. In Proceedings of the TENCON 2015-2015 IEEE Region 10 Conference, Macao, China, 1–4 November 2015; IEEE: Piscataway, NJ, USA; pp. 1–6. [Google Scholar]
  36. Maqbool, H.F.; Husman, M.A.B.; Awad, M.I.; Abouhossein, A.; Iqbal, N.; Dehghani-Sanij, A.A. A Real-Time Gait Event Detection for Lower Limb Prosthesis Control and Evaluation. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1500–1509. [Google Scholar] [CrossRef]
  37. Stalin, S.; Roy, V.; Shukla, P.K.; Zaguia, A.; Khan, M.M.; Shukla, P.K.; Jain, A. A Machine Learning-Based Big EEG Data Artifact Detection and Wavelet-Based Removal: An Empirical Approach. Math. Probl. Eng. 2021, 2021, 2942808. [Google Scholar] [CrossRef]
  38. Ma, B.; Huang, K.; Chen, G.; Tian, Y.; Jiang, N.; Zhao, C.; Liu, H. A Dual-Mode Wearable Sensor with Coupled Ion and Pressure Sensing. Soft Sci. 2024, 4, 1–9. [Google Scholar] [CrossRef]
  39. Kim, K.H.; Kim, J.H.; Ko, Y.J.; Lee, H.E. Body-Attachable Multifunctional Electronic Skins for Bio-Signal Monitoring and Therapeutic Applications. Soft Sci. 2024, 4, 24. [Google Scholar] [CrossRef]
  40. Jan, A.A.; Kim, S.; Kim, S. A Skin-Wearable and Self-Powered Laminated Pressure Sensor Based on Triboelectric Nanogenerator for Monitoring Human Motion. Soft Sci. 2024, 4, 10. [Google Scholar] [CrossRef]
  41. Dong, W.; Yang, L.; Gravina, R.; Fortino, G. Soft Wrist-Worn Multi-Functional Sensor Array for Real-Time Hand Gesture Recognition. IEEE Sens. J. 2022, 22, 17505–17514. [Google Scholar] [CrossRef]
Figure 1. (a) The hardware of the EMOTIV Epoc headset; (b) the position of the 14 electrodes.
Figure 2. Flowchart of the data processing and analysis.
Figure 3. Frequency response of different filters in the EEGLAB toolbox.
Figure 4. EEG signals (left) and independent components (right); the x-axis represents time in seconds, while the y-axis represents the voltage (in microvolts) measured by each electrode and IC.
Figure 5. (a) Example of infomax ICA results related to ascending stairs; (b) example of ADJUST results related to descending stairs. The numbers on the scalp maps stand for the specific components found by the independent component analysis, whereas the percentages indicate the confidence associated with the type of identified component. The colors on the scalp represent different levels of voltage, with warmer colors indicating higher levels of electrical potential, whereas blue and green indicate lower levels of activity. Black curves indicate isopotential lines: closer lines indicate steeper gradients of electrical potential, whereas widely spaced lines indicate more gradual changes.
Table 1. Areas of the brain covered by all the electrodes.

Electrodes | Brain Area | Role
AF3, AF4, F3, F4, F7, F8 | lobus frontalis (frontal lobe) | Detects cognitive functions related to decision-making and motor planning
FC5, FC6, T7, T8 | lobus temporalis (temporal lobe) | Detects brain activities associated with complex motor coordination
P7, P8 | lobus parietalis (parietal lobe) | Integrates sensory information and manages spatial navigation
O1, O2 | lobus occipitalis (occipital lobe) | Captures visual processing signals that influence balance and movement
Table 2. Temporal and spatial features computed through ADJUST and related threshold value T.

Feature | Threshold Value (T)
Spatial Average Difference (SAD) | 15 µV
Variance of the Time Course (VTC) | 0.1
Range of the Time Course (RTC) | 3
Spatial Variance of the Average (SVA) | 2
Kurtosis of the Time Course (KTC) | 3
Spatial Entropy (SE) | 2
Table 3. Artifact classification rules.

Artifact | Features
Eye blink | SAD > T, KTC > T, SE < T
Vertical eye movements | VTC > T
Horizontal eye movements | RTC > T
General discontinuities (muscle activity) | VTC > T, RTC > T
Table 4. Classes associated with each locomotion activity. W stands for level ground walking, AS for ascending stairs, DS for descending stairs, AR for ascending ramp, and DR for descending ramp.

Activity | W | AS | DS | AR | DR
Class | 0 | 1 | 2 | 3 | 4
Table 5. Number of independent components related to brain activity for each locomotion task. W stands for level ground walking, AS for ascending stairs, DS for descending stairs, AR for ascending ramp, and DR for descending ramp.

Dataset | W | AS | DS | AR | DR
ICs | 13 | 12 | 13 | 11 | 10
adj_ICs | 10 | 11 | 11 | 7 | 3
Table 6. Performance indices for the machine learning algorithms with both training datasets. W stands for level ground walking, AS for ascending stairs, DS for descending stairs, AR for ascending ramp, and DR for descending ramp. The overall accuracy is reported in the rightmost column.
Algorithm | Dataset | Index | W | AS | DS | AR | DR | Overall
RF | ICs | Accuracy | - | - | - | - | - | 0.73
RF | ICs | Recall | 0.96 | 0.47 | 0.25 | 0.61 | 0.92 | -
RF | ICs | Precision | 0.70 | 0.74 | 0.79 | 0.89 | 0.83 | -
RF | ICs | F1-score | 0.81 | 0.58 | 0.38 | 0.73 | 0.87 | -
RF | adj_ICs | Accuracy | - | - | - | - | - | 0.93
RF | adj_ICs | Recall | 0.99 | 0.94 | 0.80 | 0.80 | 0.98 | -
RF | adj_ICs | Precision | 0.91 | 0.93 | 0.93 | 0.90 | 0.98 | -
RF | adj_ICs | F1-score | 0.95 | 0.93 | 0.86 | 0.85 | 0.98 | -
kNN | ICs | Accuracy | - | - | - | - | - | 0.64
kNN | ICs | Recall | 0.84 | 0.35 | 0.24 | 0.71 | 0.83 | -
kNN | ICs | Precision | 0.68 | 0.53 | 0.61 | 0.54 | 0.70 | -
kNN | ICs | F1-score | 0.75 | 0.43 | 0.34 | 0.61 | 0.76 | -
kNN | adj_ICs | Accuracy | - | - | - | - | - | 0.80
kNN | adj_ICs | Recall | 0.91 | 0.93 | 0.27 | 0.55 | 0.99 | -
kNN | adj_ICs | Precision | 0.83 | 0.74 | 0.93 | 0.84 | 0.67 | -
kNN | adj_ICs | F1-score | 0.87 | 0.82 | 0.42 | 0.67 | 0.80 | -
