Article

Elbow Gesture Recognition with an Array of Inductive Sensors and Machine Learning

by Alma Abbasnia, Maryam Ravan and Reza K. Amineh *
Department of Electrical and Computer Engineering, New York Institute of Technology, New York, NY 10023, USA
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4202; https://doi.org/10.3390/s24134202
Submission received: 29 May 2024 / Revised: 18 June 2024 / Accepted: 26 June 2024 / Published: 28 June 2024
(This article belongs to the Special Issue Combining Machine Learning and Sensors in Human Movement Biomechanics)

Abstract: This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). This paper describes the design of the inductive sensor array integrated into a flexible and wearable sleeve. The sensor array consists of coils sewn onto the sleeve, which form LC tank circuits along with externally connected inductors and capacitors. Changes in the elbow position modulate the inductance of these coils, allowing the sensor array to capture a range of elbow movements. The signal processing and random forest MLA to recognize 10 different elbow gestures are described. Rigorous evaluation on 8 subjects and data augmentation, which expanded the dataset to 1270 trials per gesture, enabled the system to achieve remarkable accuracies of 98.3% and 98.5% using 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. The test performance was then assessed using data collected from five new subjects. The high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical and effective approach for intuitive human–machine interaction.

1. Introduction

The field of wearable technology has seen remarkable progress in recent years. Flexible materials, with their advantages like lightness, high stretchability, and user comfort, have enabled the development of devices that integrate seamlessly into our daily lives. This has led to the growing popularity of wearable sensing devices [1,2]. These sensors can be worn by users without significantly affecting their regular activities. Wearable sensors offer benefits such as small size, lightweight, and the ability to monitor biomedical parameters over long periods efficiently and comfortably. Wearable sensing technologies have found diverse applications, from soft robotics [3] and health tracking [4] to human–machine interfaces [5]. As the field continues to evolve, we can expect even more transformative solutions that leverage the unique capabilities of flexible materials.
A variety of wearable sensors, such as micro-electro-mechanical systems (MEMS), capacitive, electromyography (EMG), flexible microfluidic triboelectric (FMTS), strain, resistive, and inductive sensors, have been under thorough investigation for their potential in various applications for wearable devices. Each of these sensor types has its advantages and limitations. Here, we present a brief discussion of these sensing techniques, highlighting their unique features and considerations.
Advancements in sensor technology are driving the shift from rigid to flexible materials in wearable devices, improving comfort. Integrated circuits (ICs) and MEMS are key technologies enabling this transition by miniaturizing sensor chips. For instance, in [6], a human motion capture and recognition system using MEMS sensor nodes was developed to monitor human joint rehabilitation. The system uses MEMS sensor nodes and a two-stage extended Kalman filter algorithm for multi-sensor data fusion. A stationary posture calibration method is used to calculate the error matrix between the sensor node and body coordinate systems. However, MEMS sensors have limitations, including susceptibility to noise and the need for complex calibration procedures, which impact their accuracy and reliability in certain applications; in addition, their rigid construction limits flexibility.
Capacitive sensing operates on the principle of detecting changes in capacitance. These sensors can detect a wide range of gestures, from simple swipes and taps to more complex hand movements, making them versatile for various applications. For example, in [7], a non-contact capacitive sensing method was presented for recognizing forearm motions. The proposed capacitive sensing system records upper limb motion information from muscle contractions without direct skin contact. Also, in [8], a wearable capacitive sensing system was presented for recognizing lower limb locomotion modes, which includes sensing bands, a signal processing circuit, and a gait event detection module. One potential advantage of the non-contact capacitive sensing approach is that it avoids the direct skin contact required by other techniques, which could improve user comfort and ease of use. However, a potential limitation is that capacitive sensing can be more susceptible to external disturbances or environmental factors, which could impact the reliability and accuracy of the motion recognition.
Another common approach for detecting upper and lower limb movements is the use of EMG sensors. EMG can provide valuable information about muscle activity and movement intention. For instance, in [9], an algorithm was developed to detect the onset of upper limb reaching movements using surface electromyography (EMG) signals from multiple muscles to enable real-time control of upper limb exoskeletons. In [10], an upper limb prosthetic device was developed that can be controlled by myoelectric (EMG) signals. In [11], the authors investigated the use of surface electromyography (EMG) signals from hip muscles and residual limb muscles to detect changes in the locomotion mode for users of above-knee prosthetic limbs. While the key advantage of the EMG-based approach appears to be the ability to detect movement intention before the actual movement occurs, a potential limitation is that the system still requires surface EMG sensor placement, which may not be comfortable. Given that EMG signals are so weak, with amplitudes at the microvolt level, they are difficult to extract from noisy backgrounds [12,13]. As a result, complex processing approaches involving filtering, amplification, and other techniques are often required to obtain usable EMG signals, which increases the expense and complexity of the circuit design for EMG-based prosthetic systems.
In [14], researchers have introduced a pioneering method for hand gesture recognition using wearable ultrasound technology. There, a one-dimensional (1D) convolutional autoencoder was utilized to compress raw ultrasound data by a factor of 20 while preserving crucial amplitude features. These compressed data were then utilized to train an XGBoost classifier, achieving an impressive classification accuracy of 96%. This approach overcomes the limitations of traditional surface electromyography (sEMG) methods, which are restricted to monitoring superficial muscles and are susceptible to crosstalk between neighboring muscle fibers. However, the ultrasound method has its limitations, including imaging depth, resolution, field of view, and tissue penetration.
Another novel sensor used in human–machine interface applications is the flexible microfluidic triboelectric sensor (FMTS). In [15], researchers developed an FMTS designed to improve the reliability of self-powered human–machine interfaces in complex real-world environments. This sensor boasts a high transmittance of 82% and exhibits flexible, twistable, and conformable properties, making it ideal for skin attachment. By harnessing triboelectrification and electrostatic induction between a liquid stream, microchannel, and interdigital electrodes, the FMTS generates measurable voltage wave peaks.
Another widely utilized sensor in the realm of wearable technology is the strain sensor, which functions by altering its electrical resistance in response to mechanical deformation. In this context, the researchers in [16] developed a highly sensitive glove for monitoring finger motion, specifically designed for hand rehabilitation. This glove employs printed strain sensors on a rubber base with an intrinsic surface microstructure. These strain sensors, crafted from specialized ink and encapsulated with Ecoflex, demonstrated high strain sensitivity, a broad detection range, rapid response and recovery times, and impressive durability. The data collected by the glove were used to train a neural network model, allowing for accurate real-time monitoring and assessment of hand movements, thereby aiding in rehabilitation efforts. In [17], researchers developed an e-textile sleeve system to recognize arm gestures, aiming to overcome obstacles faced by computer vision techniques such as obfuscation and lighting conditions. The system integrates multiple ultra-sensitive graphene e-textile strain sensors along with an inertia measurement unit into a sports sleeve. The study includes the design and fabrication process of the sensors, as well as a detachable hardware implementation for reconfiguring the processing unit to other body parts. A user study with ten participants showed that the system could classify six different fundamental arm gestures with over 90% accuracy. However, the limitations of strain sensors, especially those based on e-textiles, can include sensitivity to environmental conditions, such as humidity and temperature, and limited durability over time.
As discussed above, the working principle of stretchable resistive strain sensors relies on the concept of mechanical strain [18], where physical deformation leads to changes in the electrical resistance. While these sensors can exhibit high sensitivity, their primary limitation is the direct relationship between mechanical deformation and electrical resistance. This means that even modest heating during operation can cause mechanical changes, which in turn affect the electrical resistance [19]. This characteristic makes these types of strain sensors challenging to use for extended durations, as the mechanical deformation can impact the sensor's performance and reliability over time. These sensors offer unique benefits but also face specific challenges. In [20], the researchers developed a wearable gesture-sensing device for monitoring the flexion angles of the elbow and knee joints. The device incorporates a textile strain sensor made of elastic conductive webbing, which exhibits a linear relationship between its electrical resistance and the flexion angle. To calibrate the device, the researchers established an equation relating the flexion angle to the resistance measured by the textile sensor using a custom-built apparatus. However, the calibration process requires an assembled apparatus with a protractor, which may not be practical for all end-use scenarios where a quick setup is desirable. Another work using resistive sensors was presented in [21]. There, a smart textile sensor was developed based on the piezo-resistive effect, composed of a mix of conductive and dielectric threads arranged in a double bridle crochet structure, to monitor elbow flexion. This sensor was integrated into a sweater, and a deep neural network model was trained to accurately recognize the elbow joint angle from the sensor data.
Lastly, inductive wearable sensors are an emerging technology that has recently gained attention. These sensors are constructed using highly conductive threads [22,23,24], which enables them to be easily incorporated into different materials and fabrics, expanding the potential applications and integration of sensing capabilities. Unlike commercially available inductive coils with specific shapes, such as rectangular or circular, sewing conductive threads allows for limitless size and shape possibilities. The geometric design parameters of these coils, including shape, size, gap between the turns, number of turns, and inner-to-outer diameter ratio, are crucial in maximizing sensor sensitivity. Therefore, finding the optimal design and placement of the coils on the body is critical for enhancing the performance of inductive textile sensors [25,26,27]. Moreover, inductive sensors are highly sensitive and can detect subtle changes in the magnetic field or in the coil's physical deformation, making them ideal for capturing detailed motion and physiological data. This level of sensitivity opens new possibilities for real-time monitoring and analysis, enabling better decision making and personalized interventions. Previous studies have shown that inductive textile sensors, utilizing conductive threads, are responsive to physical deformations, making them suitable for a variety of applications. In [28], a flexible magnetic induction-sensing device was developed that can be integrated into fabrics and attached to the human body to monitor changes in inductance during rehabilitation exercises (for arms and abdomen) and daily activities. This sensor system leverages specialized adhesives and conductive fibers to provide high sensitivity, reliability, and repeatability in detecting joint movements, offering a comfortable and non-invasive wearable solution for real-time and long-term monitoring applications. Another work that employed inductive sensors was reported in [29]. There, the performance of three different configurations of wearable inductive sensors for monitoring human joint movements was evaluated: a single planar rectangular coil, two separated planar coils connected in series, and two helical coils connected in series. Through simulations, the sensitivity of these sensor designs was analyzed in terms of the change in the resonant frequency of the tank circuits, which include the sensor coils, as the elbow joint angle was varied.
In this study, we expanded our work to detect a range of elbow gestures using textile rectangular coils. To deliver accurate outcomes while remaining flexible and comfortable for the user to wear, conductive threads were sewn on patches and placed strategically, in customizable sizes, on a compression arm sleeve to capture complex elbow movements. To read out the sensors, we implemented a data-acquisition system that measures the tank circuit formed by each sewn coil together with external inductors and capacitors. The coils' inductances change with elbow gestures, subsequently altering the resonant frequency of the corresponding LC tank circuits. These variations were captured by the data-acquisition circuit and conveyed to the computer for processing. Employing a machine learning algorithm (MLA), the acquired responses were analyzed to recognize a large range of elbow movements.

2. Materials and Methods

In this section, we introduce the operational concept of our proposed elbow gesture recognition system. Our objective is to develop a wearable sensor that prioritizes cost-effectiveness and user convenience. To achieve this, we employed an available conductive thread from Adafruit (a stainless conductive yarn with a resistance of 1 Ω/inch and a diameter of 0.4 mm [30]) to delicately stitch coils onto patches made of a commercial self-adherent wrap material. The patches were then sewn on a commercially available athletic arm sleeve, as shown in Figure 1. In this study, single-layer rectangular coil designs were integrated into four different positions on the arm and around the elbow joint to detect a wide range of elbow movements and complex gestures. The parameters of the four textile-based wearable inductive sensors presented in Figure 1 follow several key design considerations for planar spiral coils, as discussed in [31], including the number of turns n, turn width w, turn spacing s, inner diameter d_in, and outer diameter d_out. The inner and outer diameters are typically used to define two other dependent parameters: the average diameter d_avg = 0.5(d_out + d_in) and the fill ratio ρ = (d_out − d_in)/(d_out + d_in). In [31], by slightly modifying Wheeler's formula, the authors derived a valid expression for the inductance of such coils as L = (k_1 µ_0 n² d_avg)/(1 + k_2 ρ), where ρ is the fill ratio defined earlier and the coefficients k_1 and k_2 depend on the layout; for rectangular coils, they are 2.34 and 2.75, respectively. The ratio ρ indicates how hollow the inductor is; a small ρ means a hollow inductor, while a large ρ indicates a full inductor. Two inductors with the same average diameter but different fill ratios will have different inductance values; the fuller inductor will have a lower inductance because its inner turns contribute less positive mutual inductance and more negative mutual inductance. Given this information and considering the practical constraints imposed by the elbow size, we chose different parameters for each coil placed at an optimal position on the elbow to capture the motions. Moreover, smaller spacing between turns enhances interwinding magnetic coupling and reduces the occupied area. Considering sewing limitations, we chose a 2 mm gap between the turns. Overall, we experimented with different positions and parameters and selected the optimal design and placement for accurate and repeatable responses to our desired elbow gestures.
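To illustrate how these geometric parameters map to an inductance value, the following MATLAB sketch evaluates the modified Wheeler expression from [31] for one hypothetical coil; the turn count and diameters used here are placeholder values, not the exact dimensions of the fabricated sensors.

% Inductance estimate for a planar rectangular spiral coil using the
% modified Wheeler expression L = k1*mu0*n^2*d_avg/(1 + k2*rho) from [31].
mu0 = 4*pi*1e-7;       % permeability of free space (H/m)
k1  = 2.34;            % layout coefficient for rectangular coils
k2  = 2.75;            % layout coefficient for rectangular coils

n     = 5;             % number of turns (placeholder value)
d_out = 0.060;         % outer diameter in meters (placeholder value)
d_in  = 0.030;         % inner diameter in meters (placeholder value)

d_avg = 0.5*(d_out + d_in);             % average diameter
rho   = (d_out - d_in)/(d_out + d_in);  % fill ratio (hollow vs. full coil)

L = k1*mu0*n^2*d_avg/(1 + k2*rho);      % inductance in henries
fprintf('Estimated coil inductance: %.2f uH\n', L*1e6);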
Table 1 lists the final parameters employed for the single-layer rectangular coils at the four different positions on the arm and around the elbow joint, as shown in Figure 1. The primary aim in determining the optimal parameters of the coils was to maximize the change in the resonant frequency of the utilized LC tank circuit (described in the next paragraph) as the corresponding coil undergoes a transition from its initial non-deformed state (position 1) to its maximum deformation (position 2). This objective emphasizes the significance of detecting substantial frequency variations in response to the coils' deformations.
The sensing mechanism of the sewn inductive sensors is based on the measurement of the resonant frequency of an LC tank circuit. This circuit comprises the inductance of the coil meticulously stitched onto the sleeve, which is connected in series with an external inductor of 3.3 µH. The series coils are connected in parallel to an external capacitor of 330 pF. To facilitate the execution of the data-acquisition system, we implemented the board depicted in Figure 2. This system comprises two main components: the LDC1614EVM [32] and a NodeMCU Esp8266 developed by Espressif, Shanghai, China [33]. The LDC1614EVM, developed by Texas Instruments, Dallas, TX, USA, is an evaluation board for a 4-channel 28-bit inductance-to-digital converter (LDC). The NodeMCU Esp8266 acts as the interface for transmitting data from the system to a laptop via a USB cable. The acquired data are further processed using MATLAB R2022a. Figure 2 shows the hardware components of the system.
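As a concrete illustration of this sensing principle, the short MATLAB sketch below evaluates the tank resonant frequency f = 1/(2π√((L_coil + L_ext)·C)) and the shift produced by a small inductance change; the 3.3 µH external inductor and 330 pF capacitor are the values stated above, while the coil inductance and its assumed 2% deformation-induced change are illustrative placeholders.

% Resonant frequency of the LC tank: sewn coil in series with an external
% inductor, both in parallel with an external capacitor.
L_ext  = 3.3e-6;    % external series inductor (H), as used in the system
C_ext  = 330e-12;   % external parallel capacitor (F), as used in the system
L_coil = 1.7e-6;    % sewn coil inductance (H), illustrative assumption

f0 = 1/(2*pi*sqrt((L_coil + L_ext)*C_ext));    % resting resonant frequency

% A deformation of the coil changes its inductance and hence the frequency.
L_deformed = 1.02*L_coil;                      % assumed 2% inductance change
f1 = 1/(2*pi*sqrt((L_deformed + L_ext)*C_ext));

fprintf('Resonant frequency shift: %.1f kHz (from %.3f MHz)\n', ...
        (f0 - f1)/1e3, f0/1e6);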
As discussed, the measurement of the four sensors is accomplished using the LDC1614EVM board, while the NodeMCU Esp8266 microcontroller retrieves data from the LDC board using the I2C protocol. Subsequently, the microcontroller transfers the acquired data to the laptop via a USB cable and serial communication. This integrated approach ensures efficient and comprehensive data acquisition for the sensor system. Notably, to minimize the size of the LDC board, unnecessary factory-installed components, such as the USB port and a microcontroller unit, were removed. As a result of this modification, two pins, namely, SCL and SDA, were connected to the corresponding pins on the NodeMCU Esp8266 microcontroller through 4.7 kΩ pull-up resistors to keep the I2C lines at a logic-high level when idle. Additionally, the ADDR and SD pins were connected to ground to prevent floating.
Any physical deformation that alters the initial shape of the sewn coil on the sleeve results in changes in the total inductance and, consequently, the resonant frequency of the tank circuit. This shift in the resonant frequency serves as an indicator for monitoring elbow gestures [25].
For elbow gesture recognition, we considered 10 gestures, as shown in Figure 3. These gestures can be divided into two sets: gestures 1–5 represent basic elbow flexion and extension movements, while gestures 6–10 are similar but with an added component of hand twisting. This distinction allows us to investigate the impact of hand orientation, in addition to elbow joint kinematics, on the observed resonant frequency patterns. These 10 gestures are visually depicted, providing a clear representation of the various elbow movements and positions that were examined. Figure 3 also shows the corresponding resonant frequency alterations observed for each of these 10 gestures: the y-axis corresponds to the normalized resonant frequency alterations, with values ranging between 0 and 1, and the x-axis represents the trial number. Each of the five trials corresponds to one repetition of the elbow gesture, indicating that we conducted five repetitions for each elbow gesture. Specifically, the resonant frequency changes are presented for one individual participant, across five repetitions of each gesture, and for four different sensors. This figure demonstrates the consistency and reliability of these measurements, and the unique resonant frequency patterns associated with the different elbow gestures can be observed. Table 2 presents the frequency variations observed for the 4 sensors and the 10 gestures shown in Figure 3. Regarding the variation in resonant frequency alterations among different subjects, the data in Table 3, which lists the resonant frequency alterations (mean ± std) in MHz for each sensor and all gestures across all participants, suggest that the variation among participants is greater than that within a single participant. We believe that this is due to differences in how the individuals perform the gestures and in the anatomy of their elbows.
The sensors respond to shape deformations, which are combinations of bending and stretching. Table 4 therefore presents the sensitivity of the sensors in three distinct postures: front bending, side bending, and twisting. Sensitivity is defined as the ratio of the change in resonant frequency to the change in the angle from the resting position for each gesture. Since the system's response range can vary across individuals (as shown in Table 3), the sensitivity values here are based on the responses obtained from one individual.
The proposed system demonstrated a fast response time of 50 ms, mainly determined by the speed of data acquisition and transmission by the microcontroller.

3. Data Collection and Processing

This study's initial phase involved data collection as individuals gradually transitioned their hands from a resting to a gesture position. In this study, the resting position is defined as maintaining the elbow joint in a 90-degree bent position, akin to gesture 5 in Figure 3. Each measurement cycle starts with participants holding the resting position for five samples, followed by a beep sound prompting the user to start a new gesture, which is then held for another five samples. This cycle is repeated 5 times for each gesture, resulting in 50 samples per gesture for each individual. The responses of the four coils are measured at 200 ms intervals, and the data are transmitted to the laptop via a USB cable. As shown earlier in Figure 3, all sensors exhibit measurable, consistent, and unique response patterns to the 10 targeted gestures.
The research enlisted the participation of 13 healthy individuals aged 20–30 in accordance with the Institutional Review Board of the New York Institute of Technology's policies. Each participant wore the sleeve and performed 10 gestures, repeated 5 times each, resulting in 50 trials per participant. For elbow gesture detection, variations in the resonant frequencies were analyzed using MLAs, involving a training and validation phase and a testing phase. Eight subjects were included in the training and validation phase, and the remaining five subjects were included in the testing phase. During training, features extracted from the 4 coils and their labels for the 10 gestures were used. The changes in resonant frequencies caused by each gesture were concatenated into a vector and used for MLA training. To pre-process the sensor data, a two-step approach was employed. Firstly, the data for each gesture were calibrated by subtracting the mean of the initial 5 samples corresponding to the resting position from the subsequent 10 samples. Then, the energy of the samples corresponding to each gesture was computed. The energies of all sensors were concatenated into a feature vector for each gesture. To improve machine learning performance and address low accuracy with leave-one-subject-out cross-validation (LOSO-CV), a data-augmentation strategy was adopted. This involved calculating the mean of trials from different subjects for each gesture and sensor, creating new datasets while maintaining the original data characteristics. Various combinations of subjects were used to generate new datasets, significantly expanding the dataset to include 246 new datasets (1230 new sets of trials). Incorporating the data-augmentation strategy into our study offers several notable advantages. Firstly, by diversifying our dataset, we can improve the model's ability to generalize, reducing the risk of overfitting and enhancing its performance on new, unseen data, which is critical for real-world applications. Secondly, the augmented dataset allows the model to learn more deeply about the nuances of elbow gestures, potentially leading to higher classification accuracy. Moreover, this approach helps mitigate biases that might arise from individual subject characteristics, thereby bolstering the robustness of our model. By thoughtfully selecting subjects and generating diverse combinations, we can avoid dataset bias, ensuring that our model's performance is not skewed by specific subject traits.
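The MATLAB sketch below illustrates the two-step pre-processing (resting-baseline subtraction and per-sensor energy features) and the combination-averaging data augmentation described above; the variable names and the assumed data layout are ours, and the full enumeration of subject combinations shown here is only an approximation of the study's 246 augmented datasets.

% Pre-processing sketch. Assumed layout: X is [nTrials x nSamples x nSensors]
% for one gesture, where the first 5 samples of each trial are at rest.
nRest = 5;

% Step 1: baseline calibration -- subtract the mean of the resting samples.
baseline   = mean(X(:, 1:nRest, :), 2);
calibrated = X(:, nRest+1:end, :) - baseline;

% Step 2: energy of the calibrated samples for each sensor, concatenated
% into one feature vector per trial.
features = squeeze(sum(calibrated.^2, 2));     % [nTrials x nSensors]

% Augmentation sketch: average trials across subject combinations.
% subjFeat is assumed to be [nSubjects x nSensors], one averaged trial each.
augmented = [];
nSubjects = size(subjFeat, 1);
for k = 2:nSubjects
    combos = nchoosek(1:nSubjects, k);         % subject combinations of size k
    for c = 1:size(combos, 1)
        augmented(end+1, :) = mean(subjFeat(combos(c, :), :), 1); %#ok<AGROW>
    end
end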
The MLA was trained using the selected features from the four coils, along with corresponding gesture labels. To evaluate the MLA’s performance and the device’s functionality, a dataset consisting of 1270 trials for each gesture was collected, comprising 1230 augmented trials and 40 trials taken from 8 participants. Gesture recognition accuracy was initially validated using a 5-fold cross-validation (5F-CV) approach, which randomly divides the dataset into 5 folds, using 20% of trials for testing and 80% for training in each fold. This process is repeated 20 times with different random splits of the data to ensure result reliability. The average classification performance across all 20 runs is then calculated, providing a robust evaluation of the MLA’s accuracy and the system’s reliability. To facilitate cross-validation for our statistical model validation, we utilized the cvpartition function in MATLAB R2022a. This function enabled us to create a randomized partition on our dataset, which is crucial for robust cross-validation. Specifically, we configured the function with the parameters necessary for our study, such as the number of folds (5 in this case) and the stratification setting to ensure balanced class proportions across folds. By stratifying the data in this manner, we aimed to prevent potential biases stemming from imbalanced class distributions, thus improving the accuracy and generalizability of our model. Overall, this approach allowed us to effectively validate our model’s performance across different folds, enhancing the reliability of our statistical analyses.
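A minimal sketch of the stratified partitioning step with cvpartition, assuming a label vector named labels that holds the ten gesture classes; only the construction and indexing of one randomized 5-fold split is shown, not the full repeated evaluation.

% Stratified 5-fold partition: cvpartition keeps the class proportions of
% 'labels' roughly equal across folds, avoiding imbalanced splits.
rng('shuffle');                           % new random split on each run
cvp = cvpartition(labels, 'KFold', 5);    % stratified by the gesture labels

for f = 1:cvp.NumTestSets
    trainIdx = training(cvp, f);          % logical index, ~80% of trials
    testIdx  = test(cvp, f);              % logical index, ~20% of trials
    % Check that each gesture class is represented in this test fold.
    fprintf('Fold %d: %d training / %d test trials, %d classes in test\n', ...
            f, sum(trainIdx), sum(testIdx), numel(unique(labels(testIdx))));
end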
To further evaluate the model's generalizability and the system's robustness, we employed LOSO-CV, using 253 of the 254 subject-level datasets (original and augmented) for training in each iteration while reserving the data of the remaining one for testing. This method ensures the MLA is evaluated on unseen data from subjects, offering a comprehensive assessment of its performance across a diverse range of individuals. LOSO-CV is particularly valuable in evaluating how well the MLA can generalize to new subjects, a crucial factor in applications such as gesture recognition, where individual characteristics can significantly impact performance. Its use adds a layer of validation to the study, demonstrating the model's consistent performance across different subjects and highlighting the sleeve's robustness.
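A sketch of the leave-one-subject-out loop, assuming a vector subjID that marks which subject-level dataset (original or augmented) each trial belongs to, and numeric gesture labels 1–10; the classifier call is a stand-in for the random forest configuration described in the next paragraph.

% Leave-one-subject-out cross-validation: each iteration holds out all
% trials of one subject-level dataset (original or augmented) for testing.
subjects = unique(subjID);
nCorrect = 0;
for s = 1:numel(subjects)
    testIdx  = (subjID == subjects(s));
    trainIdx = ~testIdx;
    mdl  = TreeBagger(50, features(trainIdx, :), labels(trainIdx), ...
                      'Method', 'classification');
    pred = str2double(predict(mdl, features(testIdx, :)));  % numeric labels assumed
    nCorrect = nCorrect + sum(pred == labels(testIdx));
end
fprintf('LOSO-CV accuracy: %.2f%%\n', 100*nCorrect/numel(labels));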
For classification, we employed the random forest (RF) classifier, utilizing the functionalities of the Statistics and Machine Learning Toolbox in MATLAB R2022a. RF is an ensemble MLA consisting of multiple decision tree (DT) classifiers that collaborate to predict a class for a new data sample. Each DT in the ensemble independently predicts a class, and the class with the highest number of votes across all DTs is selected as the final predicted class. This approach aggregates predictions from multiple DTs, enhancing the classifier's generalization ability and robustness by reducing overfitting and increasing resilience to outliers. We implemented the RF using MATLAB's TreeBagger function, creating a TreeBagger object with 50 decision trees from the training features (input variables) and their corresponding targets (labels).
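The following MATLAB sketch shows how such a random forest can be built with TreeBagger and how the ensemble vote is exposed at prediction time; trainFeatures, trainLabels, and testFeatures are assumed variable names standing in for the feature vectors and gesture labels described above.

% Random forest via TreeBagger: 50 decision trees, each trained on a
% bootstrap sample of the training trials; the predicted class is the one
% receiving the most votes across the trees.
rf = TreeBagger(50, trainFeatures, trainLabels, ...
                'Method', 'classification', 'OOBPrediction', 'on');

% Out-of-bag error gives a quick internal estimate of generalization.
oobErr = oobError(rf);
fprintf('Out-of-bag error with all 50 trees: %.3f\n', oobErr(end));

% Predict gesture classes for new trials; 'scores' holds the fraction of
% trees voting for each class, so the majority vote is the column maximum.
[predLabels, scores] = predict(rf, testFeatures);
[~, votedClass] = max(scores, [], 2);     % index of the winning class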
We then tested the performance of the RF classifier by using a new dataset collected from five subjects, where each subject repeated each gesture five times. Therefore, the total number of test trials for each gesture was 25.

4. Results

Table 5 displays the evaluation metrics of the 5F-CV, including sensitivity, specificity, precision, and the F1 score. The F1 score, a metric particularly emphasized in our evaluation, offers a comprehensive assessment of the model’s accuracy by combining sensitivity and precision using their harmonic mean. Sensitivity (also known as recall) measures the model’s ability to correctly identify positive instances. It is calculated as the ratio of true positives (TP) to the sum of true positives and false negatives (FN):
Sensitivity_i = TP_i / (TP_i + FN_i)
where i is the class index (i = 1, …, 10). Specificity, on the other hand, quantifies the model's ability to avoid false positives. It is calculated as the ratio of true negatives (TN) to the sum of true negatives and false positives (FP):
Specificity_i = TN_i / (TN_i + FP_i)
Precision, or positive predictive value, measures the model’s accuracy in predicting positive instances. It is calculated as the ratio of true positives to the sum of true positives and false positives:
Precision_i = TP_i / (TP_i + FP_i)
The F1 score combines precision and sensitivity into a single metric, providing a balanced assessment of the model’s performance. It is calculated as the harmonic mean of precision and sensitivity:
F1_i = 2 × (Precision_i × Sensitivity_i) / (Precision_i + Sensitivity_i)
Total accuracy is calculated as the percentage of all gestures correctly identified across the ten classes:
Total accuracy = [Σ_{i=1}^{10} (TP_i + TN_i)] / [Σ_{i=1}^{10} (TP_i + TN_i + FP_i + FN_i)]
In the context of each class, true positives (TP_i) represent correctly identified gestures within that class, while true negatives (TN_i) indicate gestures accurately recognized as not belonging to that class. False positives (FP_i) refer to gestures incorrectly assigned to the class, and false negatives (FN_i) are gestures in the class that were incorrectly classified into other classes. These metrics collectively offer a detailed understanding of the classifier's performance, helping to evaluate its ability to accurately classify gestures within each class and across the entire dataset.
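For completeness, the following MATLAB sketch computes these per-class metrics and the total accuracy directly from a 10 × 10 confusion matrix C (rows are true classes, columns are predicted classes); it mirrors the definitions above rather than reproducing the authors' exact code.

% Per-class metrics and total accuracy from a confusion matrix C,
% where C(i,j) counts trials of true class i predicted as class j.
nClasses = size(C, 1);
total    = sum(C(:));

TP = diag(C);
FN = sum(C, 2) - TP;           % missed trials of each class
FP = sum(C, 1)' - TP;          % trials wrongly assigned to each class
TN = total - TP - FN - FP;     % everything else

sensitivity = TP ./ (TP + FN);
specificity = TN ./ (TN + FP);
precision   = TP ./ (TP + FP);
F1          = 2 * (precision .* sensitivity) ./ (precision + sensitivity);

% Total accuracy as defined above, aggregated over the ten classes.
totalAccuracy = sum(TP + TN) / sum(TP + TN + FP + FN);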
The results presented in Table 5 demonstrate the impressive classification performance achieved. Notably, the sensitivity for 7 out of the 10 gestures exceeded 98%, and the overall accuracy reached 98.3%, showcasing the system's exceptional ability to precisely recognize the diverse set of elbow gestures. The gestures with lower sensitivity values were gestures 1 and 3, which were predominantly misclassified as gestures 7 and 4, respectively. However, the sensitivity for these gestures still exceeds 95%. This higher rate of misclassification can be attributed to the inherent similarities within these gesture pairs: gesture 4 involves bending at a 45-degree angle relative to gesture 3, and gesture 7 is the twisted version of gesture 1, making it challenging for the classification algorithm to reliably distinguish between them. Despite this minor limitation, the overall results underscore the effectiveness of this approach in accurately identifying the various elbow movements, highlighting the potential of the proposed technique for practical applications.
To evaluate the effect of the number of trials on the overall classification accuracy, we evaluated the LOSO-CV performance starting from the case where the data were augmented with only 140 trials (obtained from all combinations of 2 subjects out of the 8 subjects) up to the case where all 1230 augmented trials from all combinations were added. Figure 4 shows the relationship between the number of trials and the accuracy of our model. As seen in Figure 4, with just 460 trials (adding the combinations of 2 and 3 subjects out of the 8 subjects), our model achieved an accuracy of 96.9%, indicating that even a relatively modest amount of augmented data can yield highly accurate results. As we increased the number of trials to 810, 1090, 1230, and 1270, we observed a corresponding improvement in performance, with the accuracy rising to 97.8%, 98.1%, 98.2%, and 98.2%, respectively. This incremental improvement underscores the benefit of additional data in enhancing model performance. However, it is important to note that beyond a certain point, specifically after 1230 trials, the accuracy gains plateaued, suggesting diminishing returns from further increasing the sample size. This observation aligns with the expectation that while more data generally lead to better model performance, there is a threshold beyond which additional trials contribute minimally to accuracy improvements.
To further evaluate the generalizability of the model, in Figure 5, the confusion matrix for all 254 subjects (1270 trials) under LOSO-CV is depicted. Column 11 of the matrix shows the precision (highlighted in green) and the false positive (FP) rate (highlighted in red) for each class. Similarly, row 11 shows the sensitivity (also highlighted in green) and the false negative (FN) rate (highlighted in red). These metrics provide detailed insights into the classifier’s performance for each class, highlighting the system’s overall accuracy and reliability.
Table 6 presents the corresponding evaluation metrics (sensitivity, specificity, precision, and F1 score). The detailed performance metrics provided in Table 6 offer a comprehensive evaluation of the classifier’s capabilities across individual gesture classes. The reported overall accuracy is 98.5%, highlighting its robust and reliable classification capabilities. In Figure 5 and Table 6, the lowest sensitivity values again belong to gestures 1 and 3, which are still above 96%. The system’s ability to reliably classify a diverse range of elbow gestures, despite potential variability in factors like hand size, shape, and individual kinematic signatures, highlights its practical utility and versatility.
Finally, to test our model on unseen subjects, we applied the trained model to the new data collected from five subjects. The confusion matrix and the corresponding evaluation metrics for all 5 test subjects (25 trials per gesture) are illustrated in Figure 6 and Table 7. In Figure 6 and Table 7, the lowest performance is attributed to gesture 5, for which 4 out of 25 trials were misclassified; this result further demonstrates the high generalizability of the developed system.

5. Discussion and Conclusions

This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and machine learning techniques. The proposed solution addresses some of the key limitations of existing elbow gesture recognition systems. Compared to rigid MEMS sensors, the flexible inductive sensor array integrated into the wearable sleeve provides improved comfort and user experience. Unlike resistive sensors, the inductive approach is less susceptible to wear and tear, improving reliability over long-term use. Also, in contrast to capacitive sensing, the inductive sensors are less affected by external disturbances, further enhancing the system’s performance. Furthermore, the inductive sensor-based approach avoids the complex signal processing and precise sensor placement required by EMG-based systems. This simplifies the overall system design and makes it more accessible for a wider range of applications. Notably, in contrast to the previous wearable sensing works, there is no need to attach bulky or uncomfortable sensors to the elbow. This approach not only reduces the cost of the sensors significantly but also offers greater comfort for the user.
The current system prototype requires a wired connection to a PC to record the sensor data. Since the utilized microcontroller, NodeMCU Esp8266, offers Wi-Fi capability, the acquired responses could be transferred to a PC wirelessly in the future, adding mobility to the system if greater user comfort is needed. Moreover, the current system uses an array of four inductive sensors sewn onto the flexible sleeve. While this sensor array was able to capture a range of 10 elbow gestures with high accuracy, future work could explore incorporating a larger number of optimized sensors on the sleeve. This could enable the detection of more complex and nuanced elbow movements, as well as potentially allow the system to be extended to monitor motions of other joints, such as the shoulder, or even lower limb movements.
It is worth noting that proper fit is crucial for the sensor array to accurately capture the intended elbow movements. Future iterations of the design could explore techniques to enable easy size adjustments or even personalized fabrication of the sleeves.
The rigorous testing conducted on 8 subjects, with data augmentation expanding the dataset to 1270 trials per gesture, enabled the system to achieve remarkable evaluation accuracies of 98.3% and 98.5% using 5F-CV and LOSO-CV, respectively, as well as a high accuracy of 94% based on 25 trials per gesture from 5 subjects. The combination of the flexible inductive sensor array and the robust RF MLA appears to be a highly effective solution for intuitive human–machine interaction applications. The inductive sensors can capture a wide range of elbow movements, while the machine learning model can reliably distinguish between 10 different gestures.
The presence of a resting position does not imply that only elbow gestures starting from the resting position can be recognized. In practice, calibration is typically performed by starting from the resting position initially and only once. Then, the user can perform various gestures, and recognition of them can be implemented without the need to repeat the resting gesture anymore. However, in our study, we initiated data collection from the resting position as a standard procedure for each gesture and sample taken from participants to ensure the smoothness of data collection and to align with our data-processing method for the system.
Overall, this work presents a promising and practical solution for elbow gesture recognition, with the potential to enable more intuitive human–machine interactions in a variety of applications, from rehabilitation and assistive technologies to gaming, robotics control, and the animation industry.

Author Contributions

Conceptualization, R.K.A.; methodology, A.A., M.R. and R.K.A.; software, A.A. and M.R.; validation, A.A. and R.K.A.; formal analysis, R.K.A.; investigation, M.R.; data curation, A.A.; writing—original draft, A.A.; writing—review and editing, M.R. and R.K.A.; project administration, M.R. and R.K.A.; funding acquisition, R.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the New York Institute of Technology (protocol #: BHS-1771, approval date 11 September 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Islam, G.M.N.; Ali, M.A.; Collie, S. Textile sensors for wearable applications: A comprehensive review. J. Cellulose. 2020, 27, 6103–6131. [Google Scholar] [CrossRef]
  2. Zheng, Y.-L.; Ding, X.-R.; Yan Poon, C.-C.; Lai Lo, B.-P.; Zhang, H.; Zhou, X.-L.; Yang, G.-Z.; Zhao, N.; Zhang, Y.-T. Unobtrusive sensing and wearable devices for health informatics. IEEE Trans. Biomed. Eng. 2014, 61, 1538–1554. [Google Scholar] [PubMed]
  3. Wyatt, F. Sensing Methods for Soft Robotics. Ph.D. Thesis, Mechanical Engineering, University of Michigan, Ann Arbor, MI, USA, 2017. [Google Scholar]
  4. Blachowicz, T.; Ehrmann, G.; Ehrmann, A. Textile-based sensors for bio signal detection and monitoring. Sensors 2021, 21, 6042. [Google Scholar] [PubMed]
  5. Herbert, R.; Kim, J.-H.; Kim, Y.S.; Lee, H.M.; Yeo, W.-H. Soft Material-Enabled, Flexible Hybrid Electronics for Medicine, Healthcare, and Human-Machine Interfaces. Materials 2018, 11, 187. [Google Scholar] [CrossRef] [PubMed]
  6. Dong, Z.; Cheng, G.; Lou, Q.; Li, D.; Gao, N.; Xu, Y.; Yu, X. Human Motion Capture Based on MEMS Sensor. J. Phys. Conf. Ser. 2023, 2456, 012047. [Google Scholar]
  7. Zheng, E.; Mai, J.; Liu, Y.; Wang, Q. Forearm Motion Recognition with Noncontact Capacitive Sensing. J. Front Neur. 2018, 27, 12–47. [Google Scholar] [CrossRef] [PubMed]
  8. Zheng, E.; Chen, B.; Wei, K.; Wang, Q. Lower Limb Wearable Capacitive Sensing and Its Applications to Recognizing Human Gaits. Sensors 2013, 13, 13334–13355. [Google Scholar] [CrossRef] [PubMed]
  9. Trigili, E.; Grazi, L.; Crea, S. Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks. J. Neuroengineering Rehabil. 2019, 16, 45. [Google Scholar] [CrossRef] [PubMed]
  10. Unanyan, N.; Belov, A.A. Design of upper limb prosthesis using real-time motion detection method based on EMG signal processing. J. Biomed. Signal. Process. Control. 2021, 70, 103062. [Google Scholar] [CrossRef]
  11. Peeraer, L.; Aeyels, B.; van der Perre, G. Development of EMG-based mode and intent recognition algorithms for a computer-controlled above-knee prosthesis. J. Biomed. Eng. 1990, 12, 178–182. [Google Scholar] [CrossRef]
  12. Bunderson, N.E.; Kuiken, T.A. Quantification of feature space changes with experience during electromyogram pattern recognition control. IEEE Trans. Neur. Sys. Reh. Eng. 2012, 20, 239–246. [Google Scholar] [CrossRef] [PubMed]
  13. Laferriere, P.; Lemaire, E.D.; Chan, A.D.C. Surface electromyographic signals using dry electrodes. IEEE Trans. Instrum. Meas. 2011, 60, 3259–3268. [Google Scholar] [CrossRef]
  14. Vostrikov, S.; Anderegg, M.; Benini, L.; Cossettini, A. Unsupervised Feature Extraction from Raw Data for Gesture Recognition with Wearable Ultra Low-Power Ultrasound. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2024. [Google Scholar] [CrossRef] [PubMed]
  15. Xiangchao, G.; Zhenqiu, G.; Liming, Z.; Haifeng, J.; Jixin, Y.; Peng, J.; Zixuan, L.; Lanyue, S.; Xuhui, S.; Zhen, W. Flexible microfluidic triboelectric sensor for gesture recognition and information encoding. Nano Energy 2023, 113, 108541. [Google Scholar]
  16. Huang, Q.; Jiang, Y.; Duan, Z.; Yuan, Z.; Wu, Y.; Peng, J.; Xu, Y.; Li, H.; He, H.; Tai, H. A Finger Motion Monitoring Glove for Hand Rehabilitation Training and Assessment Based on Gesture Recognition. IEEE Sens. J. 2023, 23, 13789–13796. [Google Scholar] [CrossRef]
  17. Yangfangzheng, L.; Yi, Z.; Cheng, S.; Rebecca, S. E-textile Sleeve with Graphene Strain Sensors for Arm Gesture Classification of Mid-Air. In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction, New York, NY, USA, 11–14 February 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar]
  18. Chen, H.; Lv, L.; Zhang, J.; Zhang, S.; Xu, P.; Li, C.; Zhang, Z.; Li, Y.; Xu, Y.; Wang, J. Enhanced Stretchable and Sensitive Strain Sensor via Controlled Strain Distribution. J. Nanomater. 2020, 10, 218. [Google Scholar] [CrossRef] [PubMed]
  19. Tan, C.; Dong, Z.; Li, Y.; Zhao, H.; Huang, X.; Zhou, Z.; Jiang, J.-W.; Long, Y.-Z.; Jiang, P.; Zhang, T.-Y.; et al. A high performance wearable strain sensor with advanced thermal management for motion monitoring. Nat. J. Commun. 2020, 11, 3530. [Google Scholar] [CrossRef] [PubMed]
  20. Shyr, T.-W.; Shie, J.-W.; Jiang, C.-H.; Li, J.-J. A Textile-Based Wearable Sensing Device Designed for Monitoring the Flexion Angle of Elbow and Knee Movements. Sensors 2014, 14, 4050–4059. [Google Scholar] [CrossRef]
  21. Maxcence, B.; Hamdi, A.; Sabine, C.; Franck, B.; Mehdi, A. SVM based approach for the assessment of elbow flexion with smart textile sensor. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 2129–2134. [Google Scholar]
  22. Tavassolian, M.J.; Cuthbert, T.; Napier, C.; Peng, J.Y.; Menon, C. Textile-based inductive soft strain Sensors for fast frequency movement and their application in wearable devices measuring multiaxial hip joint angles during running. Adv. Intell. Syst. 2020, 2, 1900165. [Google Scholar] [CrossRef]
  23. Souri, H.; Banerjee, H.; Jusufi, A.; Radacsi, N.; Stokes, A.A.; Park, I.; Sitti, M.; Amjadi, M. Wearable and Stretchable Strain Sensors: Materials, Sensing Mechanisms, and Applications. Adv. Intell. Syst. 2020, 2, 39. [Google Scholar] [CrossRef]
  24. García Patiño, A.; Menon, C. Inductive textile sensor design and validation for a wearable monitoring device. Sensors 2021, 21, 225. [Google Scholar] [CrossRef] [PubMed]
  25. Mutashar, S.A.; Hannan, M.A.; Samad, S.; Hussain, A. Analysis and optimization of spiral circular inductive coupling Link for bio-implanted applications on air and within human tissue. Sensors 2014, 14, 11522–11541. [Google Scholar] [CrossRef] [PubMed]
  26. Gong, J.; Wu, Y.; Yan, L.; Seyed, T.; Yang, X.D. Tessutivo: Contextual interactions on interactive fabrics with inductive sensing. In Proceedings of the ACM Symposium on User Interface Software and Technology, New Orleans, LA, USA, 20–23 October 2019. [Google Scholar]
  27. Mehri, S.; Ammari, A.C.; Slama, J.B.H.; Rmili, H. Geometry Optimization Approaches of Inductively Coupled Printed Spiral Coils for Remote Powering of Implantable Biomedical Sensors. J. Sens. 2016, 2016, 4869571. [Google Scholar] [CrossRef]
  28. Chen, L.; Lu, M.; Wang, Y.; Huang, Y.; Zhu, S.; Tang, J.; Zhu, C.; Liu, X.; Yin, W. Whole System Design of Wearable Magnetic Induction Sensor for Physical Rehabilitation. J. Adv. Intell. Syst. 2019, 1, 1900037. [Google Scholar] [CrossRef]
  29. Byberi, A.; Amineh, R.K.; Ravan, M. Wearable Inductive Sensing of the Arm Joint: Comparison of Three Sensing Configurations. Magnetism 2022, 2, 195–210. [Google Scholar] [CrossRef]
  30. Stainless Thin Conductive Yarn/Thick Conductive Thread. Available online: https://www.adafruit.com/product/603 (accessed on 20 May 2024).
  31. Mohan, S.S.; Hershenson, M.d.M.; Boyd, S.P.; Lee, T.H. Simple Accurate Expressions for Planar Spiral Inductances. IEEE J. Solid-State Circuits 1999, 34, 1419–1424. [Google Scholar]
  32. LDC1612EVM 2-Channel 28-Bit Inductance to Digital Converter (LDC) for Inductive Sensing. Available online: https://www.ti.com/tool/LDC1612EVM (accessed on 20 May 2024).
  33. NodeMCU ESP8266, Wi-Fi Transceiver Module and the CH340 USB Converter Chip. Available online: https://store.arduino.cc/products/nodemcu-esp8266 (accessed on 20 May 2024).
Figure 1. (a) Sleeve front side (b) Sleeve back side.
Figure 2. Block diagram of the gesture recognition system including four coils, inductance-to-digital converter (LDC1614EVM), microcontroller (NodeMCU Esp8266), and PC.
Figure 3. Resonant frequency alterations corresponding to 10 elbow gestures for a user and for 5 repetitions. The y-axis corresponds to the normalized resonant frequency alterations, with values ranging between 0 and 1, and the x-axis represents the number of trials. Each of the 5 trials corresponds to 1 repetition of the elbow gesture, indicating that we conducted 5 repetitions for each elbow gesture.
Figure 4. Accuracy as a function of the number of trials.
Figure 5. Aggregated confusion matrix for LOSO-CV. Column 11 shows the precision (highlighted in green) and the false positive (FP) rate (highlighted in red) for each class. Row 11 shows the sensitivity (highlighted in green) and the false negative (FN) rate (highlighted in red).
Figure 6. Test confusion matrix using 25 trials of the 5 test subjects. Column 11 shows the precision (highlighted in green) and the false positive (FP) rate (highlighted in red) for each class. Row 11 shows the sensitivity (highlighted in green) and the false negative (FN) rate (highlighted in red).
Table 1. Parameters used for the single-layer rectangular coil model for four sensors.
Sensor Parameter | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
Width (mm) | 40 | 50 | 60 | 50
Length (mm) | 60 | 70 | 70 | 110
Gap between turns (mm) | 2 | 2 | 2 | 2
Number of turns | 4 | 8 | 6 | 5
Table 2. Resonant frequency alterations (mean ± std) in MHz for each sensor and all gestures for a single user.
Gesture | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
1 | 0.02118 ± 0.00228 | 0.01694 ± 0.00078 | 0.01186 ± 0.00268 | 0.01882 ± 0.00628
2 | 0.01079 ± 0.00173 | 0.00913 ± 0.00184 | 0.00633 ± 0.00065 | 0.00507 ± 0.00100
3 | 0.00244 ± 0.00037 | 0.01116 ± 0.00132 | 0.00193 ± 0.00058 | 0.00372 ± 0.00186
4 | 0.00243 ± 0.00029 | 0.00523 ± 0.00041 | 0.00265 ± 0.00037 | 0.00230 ± 0.00064
5 | 0.00009 ± 0.00009 | 0.00013 ± 0.00005 | 0.00014 ± 0.00011 | 0.00013 ± 0.00010
6 | 0.00107 ± 0.00095 | 0.00308 ± 0.00053 | 0.00057 ± 0.00018 | 0.00514 ± 0.00083
7 | 0.02483 ± 0.00251 | 0.03823 ± 0.00343 | 0.01911 ± 0.00146 | 0.00572 ± 0.00244
8 | 0.00607 ± 0.00120 | 0.01201 ± 0.00162 | 0.00490 ± 0.00098 | 0.01213 ± 0.00142
9 | 0.00469 ± 0.00052 | 0.00793 ± 0.00135 | 0.00155 ± 0.00106 | 0.00476 ± 0.00084
10 | 0.00460 ± 0.00047 | 0.00441 ± 0.00123 | 0.00121 ± 0.00050 | 0.00769 ± 0.00124
Table 3. Resonant frequency alterations (mean ± std) in MHz for each sensor, for all gestures, and for all users.
Gesture | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
1 | 0.02464 ± 0.00322 | 0.02463 ± 0.00727 | 0.01187 ± 0.00274 | 0.00850 ± 0.00743
2 | 0.00938 ± 0.00469 | 0.01243 ± 0.00461 | 0.00381 ± 0.00198 | 0.00339 ± 0.00188
3 | 0.00163 ± 0.00075 | 0.01148 ± 0.00431 | 0.00346 ± 0.00151 | 0.00471 ± 0.00267
4 | 0.00190 ± 0.00099 | 0.00740 ± 0.00252 | 0.00243 ± 0.00194 | 0.00252 ± 0.00092
5 | 0.00010 ± 0.00008 | 0.00009 ± 0.00006 | 0.00007 ± 0.00003 | 0.00005 ± 0.00004
6 | 0.00162 ± 0.00134 | 0.00529 ± 0.00136 | 0.00053 ± 0.00035 | 0.00532 ± 0.00281
7 | 0.02788 ± 0.00351 | 0.03504 ± 0.00542 | 0.01585 ± 0.00257 | 0.00653 ± 0.00316
8 | 0.00645 ± 0.00226 | 0.01016 ± 0.00318 | 0.00273 ± 0.00191 | 0.00995 ± 0.00262
9 | 0.00382 ± 0.00171 | 0.00753 ± 0.00165 | 0.00466 ± 0.00129 | 0.00792 ± 0.00284
10 | 0.00981 ± 0.00207 | 0.00440 ± 0.00197 | 0.00251 ± 0.00098 | 0.00747 ± 0.00276
Table 4. Sensors’ sensitivities (Hz/degree) for various postures.
Posture | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
Front bending | 235.33 | 188.22 | 131.77 | 209.11
Side bending | 27.11 | 124 | 21.44 | 41.33
Twisting | 11.88 | 34.22 | 63.33 | 57.11
Table 5. Evaluation performance of RF-Classifier using 5F-CV for 4 sensors.
Class | Sensitivity (%) (Mean ± Std) | Specificity (%) (Mean ± Std) | Precision (%) (Mean ± Std) | F1 (%) (Mean ± Std)
1 | 95.89 ± 0.27 | 99.75 ± 0.02 | 97.67 ± 0.23 | 96.77 ± 0.14
2 | 99.95 ± 0.06 | 99.96 ± 0.01 | 99.65 ± 0.07 | 99.8 ± 0.041
3 | 96.39 ± 0.34 | 99.71 ± 0.04 | 97.39 ± 0.41 | 96.89 ± 0.21
4 | 98.06 ± 0.36 | 99.64 ± 0.05 | 96.82 ± 0.49 | 97.43 ± 0.26
5 | 99.99 ± 0.02 | 99.99 ± 0 | 99.92 ± 0 | 99.96 ± 0.01
6 | 99.50 ± 0.08 | 99.97 ± 0.01 | 99.71 ± 0.08 | 99.61 ± 0.06
7 | 97.76 ± 0.24 | 99.52 ± 0.03 | 95.73 ± 0.25 | 96.73 ± 0.13
8 | 99.02 ± 0.15 | 99.94 ± 0.01 | 99.49 ± 0.12 | 99.25 ± 0.07
9 | 98.34 ± 0.23 | 99.84 ± 0.03 | 98.53 ± 0.31 | 98.43 ± 0.18
10 | 98.06 ± 0.32 | 99.79 ± 0.02 | 98.13 ± 0.23 | 98.09 ± 0.19
Total accuracy (%) (Mean ± Std): 98.30 ± 0.08
Table 6. Evaluation performance of RF-Classifier using LOSO-CV for 4 sensors.
Class | Sensitivity (%) | Specificity (%) | Precision (%) | F1 (%)
1 | 96.14 | 99.78 | 97.99 | 97.06
2 | 100.00 | 99.96 | 99.61 | 99.80
3 | 96.85 | 99.83 | 98.40 | 97.62
4 | 98.82 | 99.75 | 97.74 | 98.28
5 | 100.00 | 99.98 | 99.84 | 99.92
6 | 99.53 | 99.97 | 99.76 | 99.65
7 | 98.11 | 99.51 | 95.70 | 96.89
8 | 98.66 | 99.96 | 99.60 | 99.13
9 | 98.98 | 99.85 | 98.67 | 98.82
10 | 98.11 | 99.77 | 97.96 | 98.03
Total accuracy (%): 98.51
Table 7. Testing performance of RF-Classifier using 25 trials of the 5 test subjects.
Class | Sensitivity (%) | Specificity (%) | Precision (%) | F1 (%)
1 | 100.00 | 100.00 | 100.00 | 100.00
2 | 100.00 | 100.00 | 100.00 | 100.00
3 | 88.00 | 99.11 | 91.67 | 89.80
4 | 92.00 | 98.67 | 88.46 | 90.20
5 | 84.00 | 100.00 | 100.00 | 91.30
6 | 88.00 | 98.67 | 88.00 | 88.00
7 | 100.00 | 100.00 | 100.00 | 100.00
8 | 96.00 | 100.00 | 100.00 | 97.96
9 | 96.00 | 99.11 | 92.31 | 94.12
10 | 96.00 | 97.78 | 82.76 | 88.89
Total accuracy (%): 94.00
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
