Article

EMD-Based Method for Supervised Classification of Parkinson’s Disease Patients Using Balance Control Data

by Khaled Safi, Wael Hosny Fouad Aly, Mouhammad AlAkkoumi, Hassan Kanj, Mouna Ghedira and Emilie Hutin
1 Computer Science Department, Strasbourg University, 67081 Strasbourg, France
2 College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
3 Laboratory ARM, EA BIOTN, UPEC, CHU Henri Mondor, 94000 Créteil, France
* Author to whom correspondence should be addressed.
Bioengineering 2022, 9(7), 283; https://doi.org/10.3390/bioengineering9070283
Submission received: 16 March 2022 / Revised: 22 June 2022 / Accepted: 22 June 2022 / Published: 28 June 2022
(This article belongs to the Special Issue Featured Papers in Computer Methods in Biomedicine)

Abstract

There has recently been increasing interest in postural stability, aimed at gaining a better understanding of the human postural system. This system controls human balance in quiet standing and during locomotion. Parkinson’s disease (PD) is the most common degenerative movement disorder that affects human stability and causes falls and injuries. This paper proposes a novel methodology to differentiate between healthy individuals and those with PD using the empirical mode decomposition (EMD) method. EMD breaks a complex signal down into several elementary signals called intrinsic mode functions (IMFs). Three temporal parameters and three spectral parameters are extracted from each stabilometric signal as well as from its IMFs. Next, the five best features are selected using a Random Forest-based feature-selection method. The classification task is carried out with four well-known machine-learning methods (KNN, decision tree, Random Forest and SVM classifiers) under 10-fold cross validation. The dataset consists of 28 healthy subjects (14 young adults and 14 old adults) and 32 PD patients (12 young adults and 20 old adults). The SVM classifier achieves an accuracy of 92%, and combining classifiers with the Dempster–Shafer formalism raises the accuracy to 96.51%.

1. Introduction

The key function of the human postural system is to stabilize the human body in any static or moving configuration. This is accomplished by compensating for external perturbations both in static posture, also called quiet standing, and during locomotion. The human postural system relies on interactions between the central nervous system, the musculoskeletal system and three sensory systems, the vestibular, visual and proprioceptive systems, to maintain the body in its upright position [1,2,3,4,5].
Parkinson’s disease (PD) is one of the most common movement disorders that damage the nervous system. As a result, postural stability is affected and affected individuals are more susceptible to falls and physical injuries. The main cause is degradation of motor control and malfunctioning of rhythm generation in the basal ganglia, which affects postural stability during quiet standing and locomotion [6,7].
PD has been the subject of many research studies focused on quiet standing and dynamic postures [8,9,10,11,12]. Data-mining techniques can be used to extract features from the collected data and to classify [13] PD and non-PD subjects [14,15].
Center-of-pressure (COP) displacements are used to analyse and evaluate the postural stability of the human body in quiet standing. COP displacements are usually recorded in two directions, in the right/left (medial–lateral) direction and in the forward/backward (anterior–posterior) direction of the human body.
Analyzed center-of-pressure (COP) output measures usually lack sensitivity. Thus, the standard spatiotemporal analysis of the COP may provide only descriptive information without any direct insight into underlying control deficits.
In [16,17], Stodółka et al. and Tanaka et al. proposed new methodologies to assess postural stability through center-of-pressure (COP) trajectories during quiet standing. New, more sensitive parameters were extracted and then used to investigate changes in postural stability with respect to visual input. The experimental data consist of stabilometric signals from eleven healthy subjects (20–27 years), recorded under eyes-open and eyes-closed conditions using a force platform during quiet standing. The proposed approach was applied separately to the medial–lateral and anterior–posterior stabilometric signals.
In [17], the stabilometric signals were modeled for each subject and each condition as an auto-regressive (AR) model. This was carried out separately for each direction (medial–lateral (ML) and anterior–posterior (AP)), and the order of the AR models was fixed in practice at M = 20. New measures (the percentage contributions and geometrical moments of the AR coefficients) were obtained from the estimated AR model parameters and showed statistically significant differences between eyes-open and eyes-closed conditions. Quiet standing under eyes-open conditions showed a higher correlation between present and past COP displacements than the eyes-closed condition. In contrast, no significant differences between vision conditions were found for conventional parameters (total length of the COP path, mean velocity). The results showed that AR parameters are useful for assessing postural stability in static posture under different visual conditions.
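To make the AR-modeling idea concrete, the short sketch below fits an order-20 AR model to a COP-like trace via the Yule–Walker equations, a common estimation route ([17] does not specify the estimator used). The synthetic `cop_ml` signal and the helper name are illustrative assumptions, not the authors' data or code; the percentage contributions and geometrical moments used in [17] would be derived from the returned coefficients.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def fit_ar_yule_walker(x, order=20):
    """Estimate AR(order) coefficients of a signal via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Solve the symmetric Toeplitz system R a = r[1:] for the AR coefficients a
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    noise_var = r[0] - np.dot(a, r[1:])
    return a, noise_var

# Synthetic stand-in for a medial-lateral COP trace (illustrative only)
rng = np.random.default_rng(0)
cop_ml = lfilter([1.0], [1.0, -1.6, 0.64], rng.normal(size=6000))

coeffs, sigma2 = fit_ar_yule_walker(cop_ml, order=20)
print(coeffs[:4], sigma2)
```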
In [14], Palmerini et al. used accelerometer-based data recorded from control and PD subjects to analyze posture in quiet stance. First, 175 measures were computed from the time and frequency domains; feature selection and classification techniques were then used to find the parameters that best discriminate between control and PD subjects. Two parameters were selected that clearly separate the control subjects from the PD subjects. Note that the feature extraction, feature selection and training phases generally require additional computational time, which can be problematic for real-time analysis.
In an attempt to diverge from the standard COP characteristics approach, Blaszczyk assessed human postural stability using force-plate posturography [18]. The work was based on a dataset of 168 subjects grouped into three categories: young adults, older adults and patients with PD. The subjects were asked to stand still, first with their eyes open and then with their eyes closed. To better understand postural stability, the author introduced three new output measures: the sway ratio (SR), the sway directional index (DI) and the sway vector (SV). The inputs to the system were age, pathology and visual conditions, and these variables strongly affected the measured outputs, yielding distinctive differences between the eyes-open and eyes-closed groups, the young adult and old adult groups, the young adult and PD groups, and the old adult and PD groups. The work concludes that the sway vector is a suitable variable for assessing postural control in quiet standing.

Empirical Mode Decomposition

In 1998, Norden Huang, a researcher at NASA, proposed a nonlinear method called empirical mode decomposition (EMD) to analyze nonlinear and non-stationary signals [19,20,21]. Like the Fourier and wavelet transforms, EMD represents a signal as a sum of elementary components; unlike those methods, however, it extracts a finite number (N) of oscillating components directly from the data without any a priori basis. The resulting components, called intrinsic mode functions (IMFs), are zero-mean, non-stationary oscillating waveforms that together cover the frequency content of the signal from the highest to the lowest frequencies.
Any signal can be written as:
$$\mathrm{Signal} = \sum_{k=1}^{N} \mathrm{IMF}_k + r_N \tag{1}$$
where $\mathrm{IMF}_k$ is the $k$th IMF and $r_N$ is the residual signal.
An IMF (intrinsic mode function) is an amplitude-modulated and frequency-modulated signal with the following characteristics:
  • The number of extrema and the number of zero crossings are either equal or differ at most by one.
  • The mean of the upper and lower envelopes is approximately zero everywhere.
EMD is an iterative approach: the first IMF is extracted from the original signal, the second IMF from the resulting residual, and so on. The significance of EMD in our methodology is that it decomposes the stabilometric signal into elementary signals according to frequency bands, which allows each IMF, carrying a specific frequency band, to be analyzed separately. This strategy provides in-depth information about the characteristics of the stabilometric signal and helps reach the best classification results.
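Below is a minimal sketch of the sifting procedure just described, assuming cubic-spline envelopes and a simple energy-based stopping rule. It is an illustration of the principle only; production implementations (such as the MATLAB `emd` routine or dedicated EMD packages) add more careful boundary handling and stopping criteria than this sketch.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_one_imf(x, max_iter=50, tol=0.05):
    """Extract one IMF from x by repeatedly subtracting the mean envelope."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(max_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break                              # too few extrema to build envelopes
        upper = CubicSpline(maxima, h[maxima])(t)
        lower = CubicSpline(minima, h[minima])(t)
        mean_env = (upper + lower) / 2.0
        h_new = h - mean_env
        # Simple stopping rule: envelope mean is small relative to the component
        if np.sum(mean_env ** 2) / np.sum(h ** 2) < tol:
            return h_new
        h = h_new
    return h

def emd_decompose(x, n_imfs=8):
    """Decompose x into at most n_imfs IMFs plus a residual."""
    residual = np.asarray(x, dtype=float).copy()
    imfs = []
    for _ in range(n_imfs):
        imf = sift_one_imf(residual)
        imfs.append(imf)
        residual = residual - imf
        if len(argrelextrema(residual, np.greater)[0]) < 3:
            break                              # residual is (near) monotonic: stop
    return np.array(imfs), residual
```

Calling `emd_decompose(cop_signal, n_imfs=8)` on a COP trace would return the first eight IMFs and the residual used by the feature-extraction stage described later.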

2. Supervised Machine-Learning Approaches

Machine learning (ML) is a sub-field of artificial intelligence (AI) used to increase a system’s knowledge [22]. ML gives computers the ability to learn autonomously and is mainly divided into three categories: (1) supervised, (2) unsupervised and (3) semi-supervised learning [23]. Supervised algorithms (SAs) take labeled inputs and outputs provided by humans; during training, an SA provides feedback about its prediction accuracy. SAs are widely used in data classification for applications such as early detection and prediction of diabetes [24,25,26,27], prediction of Alzheimer’s disease [28,29,30,31], detection of acute respiratory distress syndrome [32,33,34] and EEG signal processing [35,36,37,38].
This section briefly introduces the main supervised learning approaches; an illustrative sketch using all four classifiers is given after the list.
  • KNN
One of the simplest and best-performing methods for supervised classification is k-nearest neighbors (KNN) [39]. It is a non-parametric approach used in systems such as weather prediction [40], facial expression classification [41], eye movement detection [42] and prediction of hospital readmission for diabetic patients [43]. With this method, a new individual is classified by:
(1) computing the distance between this individual and all individuals in the training dataset, the distance function acting as a similarity measure for the new case;
(2) assigning the new case to the most common class among its k nearest neighbors.
  • CART
The classification and regression tree (CART) method is the decision tree algorithm commonly used in machine learning [44]. Its popularity is due to its simplicity, efficiency and easy interpretation. The algorithm identifies non-linear relationships between the inputs and outputs of a given system. A decision tree is composed of nodes and branches that classify the variables through recursive partitioning; a leaf node has no further branching. This method has been successfully applied in many fields [45,46].
  • RF
Random Forests are a family of methods that construct, as the name suggests, a set (or forest) of decision trees. In [47], Breiman combined bagging, an abbreviation of “bootstrap aggregating”, with random selection of the partitioning variable at each node to obtain the Random Forest method [48,49,50].
Combining these processes improves the classification performance over a single tree classifier [51,52]. A new observation is assigned to a class by a majority vote over the decisions of the individual trees constituting the forest.
  • SVM
Support Vector Machine (SVM) is another widely used supervised learning method known for its high accuracy [53,54]. It is usually ranked among the best classifiers for binary discrimination and classification problems [55,56,57]. The objective of SVM is to determine a hyperplane in an N-dimensional space that separates the given data points. If the data are linearly separable, SVM finds the hyperplane $f(x) = w^T x + b$ that separates the positive observations ($y_i = +1$) from the negative observations ($y_i = -1$) while maximizing the distance between the support vectors and the hyperplane; the margin of the SVM equals twice this distance.
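The following sketch compares the four classifiers introduced above under 10-fold cross-validation using scikit-learn. The feature matrix `X` and labels `y` are random placeholders standing in for the selected stabilometric features and the healthy/PD labels; the hyperparameters (k = 5 neighbors, 200 trees, RBF kernel) are illustrative assumptions, since the paper does not report them.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 60 subjects x 5 selected features, labels 0 = healthy, 1 = PD
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
y = np.array([0] * 28 + [1] * 32)

classifiers = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Standardizing the features inside the KNN and SVM pipelines keeps test-fold statistics out of training and matters when features have very different scales; the tree-based methods are scale-invariant and need no scaling.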

3. Methodology

In this section, we introduce the data acquisition process; then, we explain the proposed methodology that is used for the classification of the healthy subjects and those with PD.

3.1. Data Acquisition

The data were acquired and the experiments conducted at the Henri Mondor Hospital in Créteil, France. The resulting dataset comprises 28 healthy subjects (14 young adults and 14 old adults) and 32 PD patients (12 young adults and 20 old adults), i.e., stabilometric signals from 60 subjects in total. Details about the PD population are given in Table 1.
Healthy and PD individuals were asked to stand quietly while the stabilometric signals were recorded in the AP and ML directions for 60 s. ML trajectories are the center-of-pressure movements in the right/left direction of the body, and AP trajectories are the movements in the forward/backward direction.
A six-component force plate (60 × 40 cm, strain-gauge-based device from Bertec Corporation, Columbus, OH, USA) with a sampling rate of 1000 Hz was used for the measurements.
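As a practical illustration of handling such recordings, the sketch below loads one hypothetical trial file, splits it into ML and AP COP traces and optionally decimates it. The file name, the two-column layout and the decimation to 100 Hz are assumptions made for illustration; they are not preprocessing details described in the paper.

```python
import numpy as np
from scipy.signal import decimate

FS = 1000          # force-plate sampling rate (Hz), as stated in the text
DURATION = 60      # trial length in seconds

# Assumed layout: one text file per trial with two columns, ML and AP COP (hypothetical file)
trial = np.loadtxt("subject01_trial01.txt")
cop_ml, cop_ap = trial[:FS * DURATION, 0], trial[:FS * DURATION, 1]

# Optional: decimate to 100 Hz (with anti-aliasing) to lighten the EMD step --
# an assumption for illustration, not a step described by the authors
cop_ml_100 = decimate(cop_ml, 10)
cop_ap_100 = decimate(cop_ap, 10)
print(cop_ml_100.shape, cop_ap_100.shape)      # (6000,) (6000,)
```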

3.2. Proposed Classification Process

The method extracts EMD-based temporal and spectral features from the stabilometric signals. All stages of the methodology were implemented in the MATLAB programming language. The process consists of four main stages:
  • Decomposition: breaking down the stabilometric signal using EMD and obtaining a set of IMFs; the first eight IMFs are used in processing and feature extraction, see Figure 1.
  • Feature extraction: extracting three time-domain features, standard deviation ($\sigma_t$), skewness ($\beta_t$) and kurtosis ($Kurt_t$), and three frequency-domain features, spectral centroid ($C_s$), spectral skewness ($\beta_s$) and spectral kurtosis ($Kurt_s$). These features are extracted from the stabilometric signals and from their IMFs so that the classification results can be compared.
  • Feature selection: selecting the five most relevant characteristics that represent the postural sway of healthy subjects and subjects with PD using the Random Forest algorithm.
  • Machine-learning application: using the four machine-learning approaches described above (KNN, CART, RF and SVM) to classify healthy subjects and subjects with PD under 10-fold cross validation.
In summary, the importance of the methodology lies in having the EMD method decompose the stabilometric signal into frequency bands, so that each IMF, which carries a specific frequency band, can be analyzed separately. Temporal and spectral features are then extracted to capture both the time-domain and frequency-domain behavior of each IMF and of the original signal. This provides in-depth information about the characteristics of a stabilometric signal and helps reach the best classification results.
Figure 2 shows the process of the proposed approach for classifying the healthy subjects and subjects with PD.

3.3. Feature Extraction

The three temporal features studied in this work are the standard deviation $\sigma_t$ of the signal, given in Equation (2); the skewness $\beta_t$, given in Equation (3), which evaluates the asymmetry of the probability distribution of the data; and the kurtosis $Kurt_t$, given in Equation (4), which measures the tailedness of the distribution.
$$\sigma_t = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x(i)-\mu\right)^{2}} \tag{2}$$
where $N$ is the number of samples in a given signal $x$, and $\mu$ is its mean value.
$$\beta_t = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{x(i)-\mu}{\sigma}\right)^{3} \tag{3}$$
$$Kurt_t = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{x(i)-\mu}{\sigma}\right)^{4} \tag{4}$$
Next, three spectral features are extracted from the stabilometric signal itself and from its IMFs.
These features characterize the spectral energy distribution of the data. Equation (5) defines the spectral centroid $C_s$, the center of mass of the spectrum, which is commonly used in connection with the brightness of sound and in the analysis of musical timbre.
The spectral kurtosis $Kurt_s$ and the spectral skewness $\beta_s$, given in Equations (6) and (7), measure the tailedness and the asymmetry of the spectral energy distribution, respectively.
$$C_s = \frac{\sum_{w} w\,P(w)}{\sum_{w} P(w)} \tag{5}$$
where $P(w)$ is the amplitude of the $w$th frequency bin of the spectrum.
$$\beta_s = \frac{\sum_{w}\left(\frac{w-C_s}{\sigma_s}\right)^{3} P(w)}{\sum_{w} P(w)} \tag{6}$$
where $\sigma_s$ is the spectral standard deviation, i.e., the square root of the spectral variance.
$$Kurt_s = \frac{\sum_{w}\left(\frac{w-C_s}{\sigma_s}\right)^{4} P(w)}{\sum_{w} P(w)} \tag{7}$$
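A direct numpy transcription of Equations (2)–(7) is sketched below; it can be applied to a raw stabilometric signal or to any of its IMFs. Taking $P(w)$ as the one-sided FFT amplitude spectrum, with $w$ the corresponding bin frequencies, is our reading of the spectrum; the paper does not specify how the spectrum was estimated.

```python
import numpy as np

def temporal_features(x):
    """Standard deviation, skewness and kurtosis as in Equations (2)-(4)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return sigma, np.mean(z ** 3), np.mean(z ** 4)

def spectral_features(x, fs=1.0):
    """Spectral centroid, skewness and kurtosis as in Equations (5)-(7)."""
    x = np.asarray(x, dtype=float)
    P = np.abs(np.fft.rfft(x))                 # one-sided amplitude spectrum
    w = np.fft.rfftfreq(len(x), d=1.0 / fs)    # frequency of each bin
    centroid = np.sum(w * P) / np.sum(P)
    sigma_s = np.sqrt(np.sum((w - centroid) ** 2 * P) / np.sum(P))
    zs = (w - centroid) / sigma_s
    skew_s = np.sum(zs ** 3 * P) / np.sum(P)
    kurt_s = np.sum(zs ** 4 * P) / np.sum(P)
    return centroid, skew_s, kurt_s

def feature_vector(x, fs=1.0):
    """Six features (3 temporal + 3 spectral) for one signal or one IMF."""
    return np.array([*temporal_features(x), *spectral_features(x, fs)])
```

Stacking `feature_vector` over the eight IMFs of the ML and AP signals yields the 96 (6 × 2 × 8) EMD-based characteristics mentioned in Section 4.1.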

3.4. Performance Evaluation

The performance of the proposed method rests on its ability to classify healthy subjects and subjects with PD. To evaluate it, four key measures are computed; they depend on the numbers of true positives, true negatives, false positives and false negatives:
1—Accuracy, calculated using Equation (8):
$$Accuracy = \frac{T_p + T_n}{T_p + T_n + F_p + F_n} \tag{8}$$
2—Recall, calculated using Equation (9):
$$recall = \frac{T_p}{T_p + F_n} \tag{9}$$
3—Precision, calculated using Equation (10):
$$precision = \frac{T_p}{T_p + F_p} \tag{10}$$
where:
  • $T_p$ represents the number of true positive examples;
  • $T_n$ represents the number of true negative examples;
  • $F_p$ represents the number of false positive examples;
  • $F_n$ represents the number of false negative examples.
4—F-measure, calculated using Equation (11):
$$F_\beta\text{-}measure = \frac{(1+\beta^{2})\cdot precision \cdot recall}{\beta^{2}\cdot precision + recall} \tag{11}$$
where $\beta$ weights the relative importance of precision and recall in a single score; $\beta$ is set to 1 to give equal importance to both.
A supervised learning strategy is used with the stabilometric data to classify healthy subjects and subjects with PD. As such, the data labels were utilized in both the training stage and the testing stage of the models. In our approach, a 10-fold cross-validation method was used to generate the training dataset and the testing dataset.
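The measures in Equations (8)–(11) can be written out directly for the binary healthy/PD case, as in the short sketch below; the counts passed in the example call are arbitrary placeholders, not results from the paper.

```python
def classification_metrics(tp, tn, fp, fn, beta=1.0):
    """Accuracy, recall, precision and F-beta as in Equations (8)-(11)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)        # sensitivity to the positive class (here assumed to be PD)
    precision = tp / (tp + fp)     # reliability of a positive prediction
    f_beta = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
    return accuracy, recall, precision, f_beta

# Illustrative counts only (not taken from the paper's results)
print(classification_metrics(tp=30, tn=26, fp=2, fn=2))
```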

4. Experimental Results

4.1. Results and Discussions

As part of the proposed method, stabilometric signals are first decomposed into IMFs using the EMD method. Next, feature extraction is performed on both the raw data and the EMD data (IMFs) so that the results obtained with each can be compared. The features fall into two groups: the first group contains three time-domain features, standard deviation ($\sigma_t$), skewness ($\beta_t$) and kurtosis ($Kurt_t$), and the second group consists of three frequency-domain features, spectral centroid ($C_s$), spectral skewness ($\beta_s$) and spectral kurtosis ($Kurt_s$).
In this work, 12 (6 × 2) characteristics and 96 (6 × 2 × 8) characteristics are calculated from the raw data and the EMD data, respectively. As the number of features is relatively high, a feature-selection step is needed to retain only the five best features as input to the classification methods. To carry out this process, a minimal subset of features sufficient to precisely differentiate healthy subjects from subjects with PD must be identified. A Random Forest feature-selection method is therefore used to obtain the most relevant features among all the extracted ones: the score computed for each feature reflects its contribution to the prediction performance of the model, and the features are reordered according to these scores. The classifier inputs for the raw data and the EMD data are the five relevant features with the best scores.
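A sketch of this ranking step using impurity-based feature importances from scikit-learn's RandomForestClassifier is given below. The placeholder matrix `X96` (60 subjects × 96 EMD-based features), the labels and the number of trees are assumptions; the paper does not detail the exact scoring variant used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_top_features(X, y, k=5, n_estimators=500, random_state=0):
    """Rank features by Random Forest importance and return the k best column indices."""
    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]   # descending importance
    return order[:k]

# Placeholder feature matrix: 60 subjects x 96 EMD-based features (illustrative only)
rng = np.random.default_rng(2)
X96 = rng.normal(size=(60, 96))
y = np.array([0] * 28 + [1] * 32)

top5 = select_top_features(X96, y)
X_selected = X96[:, top5]          # input to the KNN/CART/RF/SVM classifiers
```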

4.1.1. Results Obtained Using the Extracted Features

The results are obtained using the features extracted from the raw data, from the EMD data and from both together. In total, 48 (12 × 4), 384 (96 × 4) and 432 (12 × 4 + 96 × 4) features are used for the raw data, the EMD data, and the raw and EMD data, respectively.
Table 2 shows the results obtained with the features extracted from the EMD data. The SVM method gives the best performance in terms of accuracy, precision, F-measure and recall; the RF and K-NN approaches follow with slightly lower scores, whereas the CART approach performs worst of the four methods. The recognition rate ranges from 78% to 91%.
Table 3 shows the results obtained with the features extracted from the raw data. Here the RF method performs best in terms of accuracy, precision, F-measure and recall; the SVM and K-NN approaches follow with slightly lower scores, whereas the CART approach again performs worst. The recognition rate ranges from 73% to 80%.
Table 4 shows the results obtained with the features extracted from both the raw data and the EMD data. The RF method again performs best in terms of accuracy, precision, F-measure and recall; the SVM and K-NN approaches follow with slightly lower scores, whereas the CART approach performs worst. The recognition rate ranges from 80% to 94%.
In conclusion, considering the results in the three tables and in Figure 3 for the features extracted from the raw data, the EMD data, and the raw and EMD data together, the highest recognition rates are obtained using the EMD data alone or combined with the raw data; features extracted only from the raw data give the worst results.

4.1.2. Obtained Results Using Classifier Combination Methods

As presented above, several classification methods were used to distinguish healthy from PD subjects based on stabilometric data. These classifiers may make different decisions for the same observation; therefore, combining their outputs (decisions) may lead to a significant improvement in the classification task. In the classifier-combination context, the objective is not to reduce redundancy in the information provided by the classifiers but to improve decision making. In this study, three well-known classifier-combination methods are used and compared: fusion based on the Bayesian formalism, fusion based on the majority voting rule and fusion based on the Dempster–Shafer formalism.
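To illustrate two of these fusion rules, the sketch below implements a crisp majority vote and Dempster's rule of combination applied to simple two-class mass functions, where each classifier places a mass equal to its (assumed) reliability on its predicted class and the remainder on ignorance. This is a toy illustration of the formalism, not the authors' exact fusion scheme; the decisions and reliability values are made up.

```python
from collections import Counter
from itertools import product

CLASSES = ("healthy", "pd")
THETA = frozenset(CLASSES)             # frame of discernment

def majority_vote(decisions):
    """Crisp majority vote over the individual classifier decisions."""
    return Counter(decisions).most_common(1)[0][0]

def simple_mass(decision, reliability):
    """Put `reliability` on the predicted class and the rest on total ignorance."""
    return {frozenset([decision]): reliability, THETA: 1.0 - reliability}

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over THETA."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Three classifiers, their decisions and assumed reliabilities (illustrative values)
decisions = ["pd", "pd", "healthy"]
reliabilities = [0.91, 0.90, 0.79]

print("majority vote:", majority_vote(decisions))

m = simple_mass(decisions[0], reliabilities[0])
for d, r in zip(decisions[1:], reliabilities[1:]):
    m = dempster_combine(m, simple_mass(d, r))
# Decide according to the largest mass assigned to a singleton class
best = max((s for s in m if len(s) == 1), key=lambda s: m[s])
print("Dempster-Shafer:", next(iter(best)), m)
```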
Table 5 summarizes the results obtained with the classifier-combination methods presented above. As shown in the table, the recognition rates obtained with these methods are greater than 94%, which shows that classifier combination improves the classification performance compared with each classifier taken independently. The methods based on the Bayesian and Dempster–Shafer formalisms give very similar results and outperform the majority vote; this can be explained by the fact that the Dempster–Shafer and Bayes methods take the errors of each classifier into account, which the majority vote does not.
To analyze the confusion that can occur between classes, the global confusion matrix obtained with the classifier combination based on the Dempster–Shafer formalism is given in Table 6. Healthy subjects are well classified, with a correct classification rate of 97.32%. Most of the confusion arises from PD patients being classified as healthy subjects, with an error rate of 4.11%; this can be explained by the fact that some subjects are in an early stage of Parkinson’s disease and may be confused with healthy persons.
This study shows the benefit of applying the EMD method to stabilometric data and extracting temporal and spectral features from the resulting IMFs. The obtained results demonstrate the superiority of the proposed EMD-based method over the classical approach of extracting features directly from the raw data. With the proposed method, SVM differentiates healthy from PD subjects in 91.08% of cases, i.e., it correctly classifies about 91 out of 100 subjects. In addition, combining the decisions of several classifiers improves the classification accuracy up to 96.51% with the Dempster–Shafer method, i.e., roughly 96 out of 100 subjects are correctly classified. This confirms the superiority of the proposed method over the classical approach, whose best accuracy is 80.49%. Moreover, the 96.51% accuracy should be acceptable from a clinical point of view, as the corresponding error margin of about 3.5% is reasonable. Accordingly, this method can help physicians diagnose PD and make decisions concerning accurate and effective treatment.

4.2. Comparison with Other Studies

Mei et al. [58] present a taxonomy of the most relevant studies using machine-learning techniques for the diagnosis of Parkinson’s disease. They reviewed 209 studies and investigated their aims, sources of data, types of data, machine-learning methods and associated outcomes.
The authors of [59] proposed a new approach to Parkinson’s disease classification based on data partitioning with principal component analysis (PCA) for feature selection. They considered two classes, healthy and Parkinson’s disease, used a combination of SVM and weighted k-NN classifiers, and obtained an accuracy of 89.23%. Celik et al. [60] proposed an approach to improve the diagnosis of Parkinson’s disease using machine-learning methods; they compared classifiers such as Extra Trees, Logistic Regression, Gradient Boosting, Random Forest and Support Vector Machine and predicted Parkinson’s disease with 76% accuracy. Another study [61] combines feature selection and classification: Feature Importance and Recursive Feature Elimination methods were used for feature selection, and Classification and Regression Trees, Artificial Neural Networks and Support Vector Machines were applied for classification, achieving an accuracy of 93.84%. Our proposed method provides an accuracy of 96%.

5. Conclusions and Perspectives

In this paper, we introduced a strategy to classify healthy subjects and subjects with PD. The proposed approach consists of four main stages: stabilometric data decomposition using EMD, extraction of temporal and spectral features, feature selection, and classification using the SVM, RF, KNN and CART methods. The obtained results show that the proposed approach can reach correct classification rates of up to 96% when classifying healthy and PD subjects. Results obtained with the EMD-based data show better classification rates than classical strategies based on extracting features from the raw data. This methodology could be used in future studies to distinguish between PD stages and help physicians detect the disease earlier.
Two limitations can be mentioned. The first is the potential overfitting due to feature selection. The second concerns the comparability of the healthy subjects and the PD patients with respect to age, and how close the PD population is to the target group of such a model, mainly PD around the time of first diagnosis. As an extension of this study, the proposed methodology could be applied to a larger dataset for verification and validation purposes; such a dataset should contain PD subjects of all ages and all PD stages to ensure that it represents the PD population. In addition, other feature-selection algorithms should be compared in order to choose the one that gives the highest accuracy while avoiding overfitting.

Author Contributions

Conceptualization, K.S. and H.K.; methodology, K.S.; software, K.S. and W.H.F.A.; validation, K.S., M.A., M.G. and E.H.; formal analysis, K.S. and M.G.; investigation, K.S.; resources, M.G. and E.H.; data curation, K.S., M.G. and E.H.; writing—original draft preparation, K.S. and H.K.; writing—review and editing, M.A. and W.H.F.A.; visualization, K.S.; supervision, E.H.; project administration, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Experimental data were provided by the ARM Laboratory, CHU Henri Mondor, Créteil, France.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PD    Parkinson’s disease
EMD   Empirical mode decomposition
IMFs  Intrinsic mode functions
KNN   K-Nearest Neighbor
SVM   Support Vector Machine
COP   Center of pressure
AR    Auto-regressive
AP    Anterior–posterior
ML    Medial–lateral

References

  1. Karmali, F.; Goodworth, A.D.; Valko, Y.; Leeder, T.; Peterka, R.J.; Merfeld, D.M. The role of vestibular cues in postural sway. J. Neurophysiol. 2021, 125, 672–686. [Google Scholar] [CrossRef] [PubMed]
  2. Maurer, C.; Mergner, T.; Bolha, B.; Hlavacka, F. Vestibular, visual, and somatosensory contributions to human control of upright stance. Neurosci. Lett. 2000, 281, 99–102. [Google Scholar] [CrossRef]
  3. Mergner, T.; Maurer, C.; Peterka, R. A multisensory posture control model of human upright stance. Prog. Brain Res. 2003, 142, 189–201. [Google Scholar] [PubMed]
  4. Mohebbi, A.; Amiri, P.; Kearney, R.E. Contributions of Vision in Human Postural Control: A Virtual Reality-based Study. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3347–3350. [Google Scholar]
  5. Peterka, R. Sensorimotor integration in human postural control. J. Neurophysiol. 2002, 88, 1097–1118. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Safi, K.; Mohammed, S.; Attal, F.; Amirat, Y.; Oukhellou, L.; Khalil, M.; Gracies, J.M.; Hutin, E. Automatic Segmentation of Stabilometric Signals Using Hidden Markov Model Regression. IEEE Trans. Autom. Sci. Eng. 2018, 15, 545–555. [Google Scholar] [CrossRef]
  7. Safi, K.; Mohammed, S.; Albertsen, I.M.; Delechelle, E.; Amirat, Y.; Khalil, M.; Gracies, J.M.; Hutin, E. Automatic analysis of human posture equilibrium using empirical mode decomposition. Signal Image Video Process. 2017, 11, 1081–1088. [Google Scholar] [CrossRef]
  8. Morone, G.; Iosa, M.; Cocchi, I.; Paolucci, T.; Arengi, A.; Bini, F.; Marinozzi, F.; Ciancarelli, I.; Paolucci, S.; De Angelis, D. Effects of a posture shirt with back active correction keeper on static and dynamic balance in Parkinson’s disease. J. Bodyw. Mov. Ther. 2021, 28, 138–143. [Google Scholar] [CrossRef]
  9. Malin, K. Power Training for Improvement of Postural Stability and Reduction of Falls in Individuals With Parkinson Disease. Top. Geriatr. Rehabil. 2021, 37, 12–16. [Google Scholar] [CrossRef]
  10. Wilczyński, J.; Pedrycz, A.; Mucha, D.; Ambroży, T.; Mucha, D. Body posture, postural stability, and metabolic age in patients with Parkinson’s disease. BioMed Res. Int. 2017, 2017, 3975417. [Google Scholar] [CrossRef] [Green Version]
  11. Bekkers, E.M.; Dockx, K.; Devan, S.; Van Rossom, S.; Verschueren, S.M.; Bloem, B.R.; Nieuwboer, A. The impact of dual-tasking on postural stability in people with Parkinson’s disease with and without freezing of gait. Neurorehabilit. Neural Repair 2018, 32, 166–174. [Google Scholar] [CrossRef] [Green Version]
  12. Pereira, A.P.S.; Marinho, V.; Gupta, D.; Magalhães, F.; Ayres, C.; Teixeira, S. Music therapy and dance as gait rehabilitation in patients with parkinson disease: A review of evidence. J. Geriatr. Psychiatry Neurol. 2019, 32, 49–56. [Google Scholar] [CrossRef]
  13. Kanj, S. Learning Methods for Multi-Label Classification. Ph.D. Thesis, Université de Technologie de Compiègne, Université Libanaise, Beirut, Lebanon, 2013. [Google Scholar]
  14. Palmerini, L.; Rocchi, L.; Mellone, S.; Valzania, F.; Chiari, L. Feature selection for accelerometer-based posture analysis in Parkinson’s disease. IEEE Trans. Inf. Technol. Biomed. 2011, 15, 481–490. [Google Scholar] [CrossRef] [PubMed]
  15. Brewer, B.; Pradhan, S.; Carvell, G.; Delitto, A. Feature selection for classification based on fine motor signs of parkinson’s disease. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 214–217. [Google Scholar]
  16. Stodółka, J.; Blach, W.; Vodicar, J.; Maćkała, K. The characteristics of feet center of pressure trajectory during quiet standing. Appl. Sci. 2020, 10, 2940. [Google Scholar] [CrossRef]
  17. Tanaka, H.; Nakashizuka, M.; Uetake, T.; Itoh, T. The effects of visual input on postural control mechanisms: An analysis of center-of-pressure trajectories using the auto-regressive model. J. Hum. Ergol. 2000, 29, 15–25. [Google Scholar]
  18. Blaszczyk, J.W. The use of force-plate posturography in the assessment of postural instability. Gait Posture 2016, 44, 1–6. [Google Scholar] [CrossRef]
  19. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  20. Rilling, G.; Flandrin, P.; Goncalves, P. On empirical mode decomposition and its algorithms. In Proceedings of the IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing, IEEER, Grado, Italy, 8–11 June 2003; Volume 3, pp. 8–11. [Google Scholar]
  21. Rilling, G. Décompositions Modales Empiriques. Contributions à la théorie, L’algorithmie et L’analyse de Performances. Ph.D. Thesis, Ecole Normale supérieure de Lyon-ENS LYON, Lyon, France, 2007. [Google Scholar]
  22. Saravanan, R.; Sujatha, P. A state of art techniques on machine learning algorithms: A perspective of supervised learning approaches in data classification. In Proceedings of the 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 14–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 945–949. [Google Scholar]
  23. Gupta, R. A Survey on Machine Learning Approaches and Its Techniques. In Proceedings of the 2020 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 22–23 February 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  24. Chauhan, T.; Rawat, S.; Malik, S.; Singh, P. Supervised and Unsupervised Machine Learning based Review on Diabetes Care. In Proceedings of the 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 1, pp. 581–585. [Google Scholar]
  25. Rubaiat, S.Y.; Rahman, M.M.; Hasan, M.K. Important feature selection & accuracy comparisons of different machine learning models for early diabetes detection. In Proceedings of the 2018 International Conference on Innovation in Engineering and Technology (ICIET), Dhaka, Bangladesh, 27–28 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar]
  26. Hasan, D.A.; Zeebaree, S.R.; Sadeeq, M.A.; Shukur, H.M.; Zebari, R.R.; Alkhayyat, A.H. Machine Learning-based Diabetic Retinopathy Early Detection and Classification Systems-A Survey. In Proceedings of the 2021 1st Babylon International Conference on Information Technology and Science (BICITS), Babil, Iraq, 28–29 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 16–21. [Google Scholar]
  27. Kopitar, L.; Kocbek, P.; Cilar, L.; Sheikh, A.; Stiglic, G. Early detection of type 2 diabetes mellitus using machine learning-based prediction models. Sci. Rep. 2020, 10, 11981. [Google Scholar] [CrossRef] [PubMed]
  28. Neelaveni, J.; Devasana, M.G. Alzheimer disease prediction using machine learning algorithms. In Proceedings of the 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 6–7 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 101–104. [Google Scholar]
  29. Uysal, G.; Ozturk, M. Using Machine Learning Methods for Detecting Alzheimer’s Disease through Hippocampal Volume Analysis. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  30. Almubark, I.; Chang, L.C.; Nguyen, T.; Turner, R.S.; Jiang, X. Early detection of Alzheimer’s disease using patient neuropsychological and cognitive data and machine-learning techniques. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 5971–5973. [Google Scholar]
  31. Sivakani, R.; Ansari, G.A. Machine Learning Framework for Implementing Alzheimer’s Disease. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 0588–0592. [Google Scholar]
  32. Reamaroon, N.; Sjoding, M.W.; Lin, K.; Iwashyna, T.J.; Najarian, K. Accounting for label uncertainty in machine learning for detection of acute respiratory distress syndrome. IEEE J. Biomed. Health Inform. 2018, 23, 407–415. [Google Scholar] [CrossRef] [PubMed]
  33. Sabeti, E.; Drews, J.; Reamaroon, N.; Warner, E.; Sjoding, M.W.; Gryak, J.; Najarian, K. Learning using partially available privileged information and label uncertainty: Application in detection of acute respiratory distress syndrome. IEEE J. Biomed. Health Inform. 2020, 25, 784–796. [Google Scholar] [CrossRef] [PubMed]
  34. Sabeti, E.; Drews, J.; Reamaroon, N.; Gryak, J.; Sjoding, M.; Najarian, K. Detection of acute respiratory distress syndrome by incorporation of label uncertainty and partially available privileged information. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1717–1720. [Google Scholar]
  35. Rasheed, K.; Qayyum, A.; Qadir, J.; Sivathamboo, S.; Kwan, P.; Kuhlmann, L.; O’Brien, T.; Razi, A. Machine learning for predicting epileptic seizures using EEG signals: A review. IEEE Rev. Biomed. Eng. 2020, 14, 139–155. [Google Scholar] [CrossRef] [PubMed]
  36. Hosseini, M.P.; Hosseini, A.; Ahi, K. A Review on machine learning for EEG Signal processing in bioengineering. IEEE Rev. Biomed. Eng. 2020, 14, 204–218. [Google Scholar] [CrossRef]
  37. Yang, W.; Joo, M.; Kim, Y.; Kim, S.H.; Chung, J.M. Hybrid Machine Learning Scheme for Classification of BECTS and TLE Patients Using EEG Brain Signals. IEEE Access 2020, 8, 218924–218935. [Google Scholar] [CrossRef]
  38. Bird, J.J.; Kobylarz, J.; Faria, D.R.; Ekárt, A.; Ribeiro, E.P. Cross-domain mlp and cnn transfer learning for biological signal processing: Eeg and emg. IEEE Access 2020, 8, 54789–54801. [Google Scholar] [CrossRef]
  39. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef] [Green Version]
  40. Mantri, R.; Raghavendra, K.R.; Puri, H.; Chaudhary, J.; Bingi, K. Weather Prediction and Classification Using Neural Networks and k-Nearest Neighbors. In Proceedings of the 2021 8th International Conference on Smart Computing and Communications (ICSCC), Kochi, Kerala, India, 1–3 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 263–268. [Google Scholar]
  41. Afriansyah, Y.; Nugrahaeni, R.A.; Prasasti, A.L. Facial Expression Classification for User Experience Testing Using K-Nearest Neighbor. In Proceedings of the 2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), Bandung, Indonesia, 27–28 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 63–68. [Google Scholar]
  42. Faris, I.; Utaminingrum, F. Eye Movement Detection using Histogram Oriented Gradient and K-Nearest Neighbors. In Proceedings of the 2021 International Conference on ICT for Smart Society (ICISS), Bandung, Indonesia, 2–4 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar]
  43. Bahanshal, S.; Kim, B. An Optimized Hybrid Fuzzy Weighted k-Nearest Neighbor to Predict Hospital Readmission for Diabetic Patients. In Proceedings of the 2021 IEEE 13th International Conference on Computer Research and Development (ICCRD), Beijing, China, 5–7 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 115–120. [Google Scholar]
  44. Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; CRC Press: Boca Raton, FL, USA, 1984. [Google Scholar]
  45. Aryuni, M.; Miranda, E.; Bernando, C.; Hartanto, A. Coronary Artery Disease Prediction Model using CART and SVM: A Comparative Study. In Proceedings of the 2021 1st International Conference on Computer Science and Artificial Intelligence (ICCSAI), Jakarta, Indonesia, 28 October 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 1, pp. 72–75. [Google Scholar]
  46. Phyo, P.P.; Jenanunta, C. Daily Load Forecasting based on a Combination of Classification and Regression Tree and Deep Belief Network. IEEE Access 2021, 9, 152226–152242. [Google Scholar] [CrossRef]
  47. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  48. Chai, Z.; Zhao, C. Multiclass oblique Random Forests with dual-incremental learning capacity. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5192–5203. [Google Scholar] [CrossRef] [PubMed]
  49. Gupta, V.K.; Gupta, A.; Kumar, D.; Sardana, A. Prediction of COVID-19 confirmed, death, and cured cases in India using Random Forest model. Big Data Min. Anal. 2021, 4, 116–123. [Google Scholar] [CrossRef]
  50. Chen, X.; Yu, S.; Zhang, Y.; Chu, F.; Sun, B. Machine Learning Method for Continuous Noninvasive Blood Pressure Detection Based on Random Forest. IEEE Access 2021, 9, 34112–34118. [Google Scholar] [CrossRef]
  51. Liu, C.; Gu, Z.; Wang, J. A Hybrid Intrusion Detection System Based on Scalable K-Means+ Random Forest and Deep Learning. IEEE Access 2021, 9, 75729–75740. [Google Scholar] [CrossRef]
  52. Dong, L.; Du, H.; Mao, F.; Han, N.; Li, X.; Zhou, G.; Zheng, J.; Zhang, M.; Xing, L.; Liu, T.; et al. Very high resolution remote sensing imagery classification using a fusion of Random Forest and deep learning technique—Subtropical area for example. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 13, 113–128. [Google Scholar] [CrossRef]
  53. Vapnik, V.N.; Vapnik, V. Statistical Learning Theory; Wiley: New York, NY, USA, 1998; Volume 1. [Google Scholar]
  54. Hsu, C.W.; Chang, C.C.; Lin, C.J. A Practical Guide to Support Vector Classification. 2003. Available online: http://www.datascienceassn.org/sites/default/files/PracticalGuidetoSupportVectorClassification.pdf (accessed on 16 March 2022).
  55. Huang, X.; Huang, C.; Zhai, G.; Lu, X.; Xiao, G.; Sui, L.; Deng, K. Data Processing Method of Multibeam Bathymetry Based on Sparse Weighted LS-SVM Machine Algorithm. IEEE J. Ocean. Eng. 2019, 45, 1538–1551. [Google Scholar] [CrossRef]
  56. Yu, S.; Li, X.; Zhang, X.; Wang, H. The OCS-SVM: An objective-cost-sensitive SVM with sample-based misclassification cost invariance. IEEE Access 2019, 7, 118931–118942. [Google Scholar] [CrossRef]
  57. Cao, J.; Lv, G.; Chang, C.; Li, H. A feature selection based serial SVM ensemble classifier. IEEE Access 2019, 7, 144516–144523. [Google Scholar] [CrossRef]
  58. Mei, J.; Desrosiers, C.; Frasnelli, J. Machine learning for the diagnosis of parkinson’s disease: A review of literature. Front. Aging Neurosci. 2021, 13, 184. [Google Scholar] [CrossRef]
  59. Mittal, V.; Sharma, R. Machine-learning approach for classification of Parkinson disease using acoustic features. J. Reliab. Intell. Environ. 2021, 7, 233–239. [Google Scholar] [CrossRef]
  60. Celik, E.; Omurca, S.I. Improving Parkinson’s disease diagnosis with machine learning methods. In Proceedings of the 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, 24–26 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  61. Senturk, Z.K. Early diagnosis of Parkinson’s disease using machine learning algorithms. Med. Hypotheses 2020, 138, 109603. [Google Scholar] [CrossRef]
Figure 1. AP stabilometric signal and its first eight IMFs for PD (left) and healthy (right) subjects.
Figure 2. Healthy and PD patient classification process.
Figure 3. Obtained results in terms of recognition rate for each classifier using extracted/selected features from the EMD, raw, and EMD and raw data.
Table 1. A description of the PD population.

Criteria               | Value
Age (mean ± SD)        | 67 ± 8 years
Time since diagnosis   | 8 ± 5 years
Score (Hoehn and Yahr) | 2.2 ± 0.3
Weight                 | 75 ± 18 kg
Height                 | 167 ± 11 cm
Table 2. F1-measure, precision, recall and average accuracy rate (R) with its standard deviation (std) for each model (EMD data).

Model    | F1-Measure | Precision | Recall | Accuracy (R) ± (std)
k-NN (%) | 89.90      | 90.31     | 90.19  | 89.90 ± 4.51
CART (%) | 78.51      | 78.98     | 78.20  | 78.71 ± 3.81
RF (%)   | 90.01      | 90.12     | 0.16   | 90 ± 3.16
SVM (%)  | 89.98      | 89.97     | 91.19  | 91.08 ± 4.10
Table 3. F1-measure, precision, recall and average accuracy rate (R) with its standard deviation (std) for each model (raw data).

Model    | F1-Measure | Precision | Recall | Accuracy (R) ± (std)
k-NN (%) | 80.31      | 80.19     | 80.29  | 80.29 ± 5.02
CART (%) | 73.67      | 73.69     | 73.70  | 73.79 ± 7.09
RF (%)   | 80.34      | 80.38     | 80.51  | 80.49 ± 6.41
SVM (%)  | 79.60      | 80.21     | 79.39  | 79.79 ± 5.61
Table 4. F1-measure, precision, recall and average accuracy rate (R) with its standard deviation (std) for each model (EMD and raw data).

Model    | F1-Measure | Precision | Recall | Accuracy (R) ± (std)
k-NN (%) | 92.95      | 93.41     | 93.62  | 93.28 ± 2.52
CART (%) | 80.32      | 80.30     | 80.41  | 80.40 ± 4.65
RF (%)   | 94.21      | 93.91     | 94.19  | 94.15 ± 2.78
SVM (%)  | 92.51      | 92.48     | 92.8   | 92.6 ± 2.13
Table 5. F1-measure, precision, recall and average accuracy rate (R) with its standard deviation (std) for each classifier-combination method.

Method               | F1-Measure | Precision | Recall | Accuracy (R) ± (std)
Majority vote (%)    | 94.90      | 94.91     | 94.79  | 94.79 ± 4.09
Bayes (%)            | 96.29      | 96.41     | 96.30  | 96.23 ± 3.19
Dempster–Shafer (%)  | 96.49      | 96.60     | 96.39  | 96.51 ± 3.19
Table 6. Global confusion matrix obtained with the Dempster–Shafer classifier fusion.

True classes    | Healthy | PD patients
Healthy (%)     | 97.32   | 2.68
PD patients (%) | 4.11    | 95.89