Article

An Actively Vision-Assisted Low-Load Wearable Hand Function Mirror Rehabilitation System

by Zheyu Chen 1, Huanjun Wang 1, Yubing Yang 1, Lichao Chen 1, Zhilong Yan 1, Guoli Xiao 1, Yi Sun 1, Songsheng Zhu 1,2, Bin Liu 1,*, Liang Li 1,2,* and Jianqing Li 1,2,*

1 School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211100, China
2 Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, Nanjing 211100, China
* Authors to whom correspondence should be addressed.
Actuators 2024, 13(9), 368; https://doi.org/10.3390/act13090368
Submission received: 13 August 2024 / Revised: 10 September 2024 / Accepted: 17 September 2024 / Published: 19 September 2024
(This article belongs to the Special Issue Actuators and Robotic Devices for Rehabilitation and Assistance)

Abstract

The restoration of fine motor function in the hand is crucial for stroke survivors with hemiplegia to reintegrate into daily life and presents a significant challenge in post-stroke rehabilitation. Current mirror rehabilitation systems based on wearable devices require medical professionals or caregivers to assist patients in donning sensor gloves on the healthy side, thus hindering autonomous training, increasing labor costs, and imposing psychological burdens on patients. This study developed a low-load wearable hand function mirror rehabilitation robotic system based on visual gesture recognition. The system incorporates an active visual apparatus capable of adjusting its position and viewpoint autonomously, enabling continuous monitoring of the healthy hand’s gestures throughout the rehabilitation process. Consequently, patients only need to wear the device on their impaired hand to complete the mirror training, facilitating independent rehabilitation exercises. An algorithm based on hand key point gesture recognition was developed, which is capable of automatically identifying eight distinct gestures. Additionally, the system supports remote audio–video interaction during training sessions, addressing the lack of professional guidance in independent rehabilitation. A prototype of the system was constructed, a dataset for hand gesture recognition was collected, and the system’s performance and functionality were rigorously tested. The results indicate that the gesture recognition accuracy exceeds 90% under ten-fold cross-validation. The system enables operators to independently complete hand rehabilitation training, while the active visual system accommodates a patient’s rehabilitation needs across different postures. This study explores methods for autonomous hand function rehabilitation training, thereby offering valuable insights for future research on hand function recovery.

1. Introduction

With an aging population, the incidence of stroke increases annually. According to statistics, more than ten million people worldwide are affected by stroke each year [1,2]. Approximately 80% of stroke patients exhibit hemiplegia [3], resulting in a loss of independence that imposes a heavy burden on families and society. Hand function rehabilitation is crucial for stroke patients; only by restoring basic hand motor function can patients better perform daily activities such as eating and communicating. However, because hand function recovery involves the complex coordination of fine motor skills and sensory functions, post-stroke hand function recovery is often slow, and rehabilitation training is relatively challenging [4].
Mirror therapy has positive effects on motor function rehabilitation outcomes [5]. In traditional mirror therapy, a mirror is placed along a patient’s sagittal plane. The patient moves the healthy hand while keeping the impaired hand still and hidden behind the mirror. The reflection of the healthy hand’s movement creates the illusion that the impaired hand is also moving, stimulating the brain to undergo neuroplastic changes that gradually restore hand function [6,7]. Mirror rehabilitation therapy is widely used in the field of rehabilitation, and with advancements in technology, more robot-assisted mirror rehabilitation devices have been developed to replace simple mirrors, offering more convenient and effective therapy [8].
Currently, various types of hand function mirror rehabilitation devices exist. A typical example is the Armeo®Power (Hocoma, Volketswil, Switzerland) [9], which supports learning exercises for hand opening and closing through a powered hand module. Other examples include the AMADEO® (Tyromotion, Graz, Austria) [10] and Hand of Hope (Rehab-Robotics Company, Hong Kong, China) [11]. These devices, which are guided and assisted by rehabilitation therapists during clinical use, produce favorable rehabilitation outcomes; however, since motor rehabilitation is often a long-term process, and given the medical costs and resource limitations, most patients opt for home rehabilitation following the acute phase of stroke [12]. For patients undergoing home rehabilitation, on the one hand, clinical rehabilitation devices are expensive and complex to operate [13,14]. On the other hand, the lack of professional guidance often reduces the effectiveness of home rehabilitation [15].
In response to the demand for home rehabilitation, researchers have proposed several miniaturized, low-cost hand function rehabilitation devices [16,17]. Ranzani et al. designed a portable three-degree-of-freedom end-effector haptic device (HandyBot, Kaunas, Lithuania) [18]; however, these home rehabilitation systems continue to face several issues. First, donning the system remains a significant hurdle during use. In hand function mirror rehabilitation, patients must wear a rehabilitation glove on the impaired hand and a gesture recognition device on the healthy hand. Typically, patients can use their healthy hand to assist in donning the rehabilitation glove on the impaired hand; however, due to limited function in the impaired hand, they find it difficult to place the gesture recognition device on the healthy hand. This limitation prevents patients from independently completing rehabilitation tasks. Furthermore, the lack of effective professional guidance continues to be a significant issue. In low-cost home rehabilitation, devices typically only offer limited repetitive motion functions. Neither medical professionals nor family members can determine whether a patient’s rehabilitation actions meet clinical standards or assess their current rehabilitation status.
Recognizing a hand’s movement intentions is relatively difficult due to the involvement of multiple fingers and joints in fine motor movements. Current methods for recognizing hand movement intentions include inertial measurement units (IMUs) [19], electromyography [20], data gloves [21], and vision-based systems [22]. IMUs capture hand movements through inertial sensors, providing convenient and real-time information acquisition; however, capturing movements of multiple fingers requires additional sensors, increasing system complexity. Electromyography systems can capture movement intention information at the muscle group level, effectively reflecting muscle activity; however, as an indirect measurement, electromyography faces challenges in representing fine hand movements, particularly those controlled by deep muscle signals that are challenging to capture and analyze with surface electrodes. Data gloves can comprehensively capture multi-degree-of-freedom movements of all hand joints; however, they are relatively expensive. Moreover, whether using IMUs, electromyography, or data gloves, these contact-based measurements require devices to be worn on a patient’s healthy hand, which complicates independent home rehabilitation.
Vision systems can directly capture images of a patient’s hand, enabling image-processing algorithms to provide direct records and analyses of the current state of the hand [23]. Vision-based recognition is a non-contact measurement process that can identify hand gestures without the need for wearable devices [24,25]. Since the visual system captures planar information, issues such as occlusion and viewpoint restrictions arise. To address this, most existing systems fix the position of the subject’s observed hand to achieve accurate motion recognition. However, this remains a challenge for patients undergoing independent home rehabilitation.
In this paper, we present an active visual mirror hand function rehabilitation system tailored to the independent rehabilitation needs of home-bound patients. This system features a low-load wearable design, allowing hemiplegic patients to independently complete the entire rehabilitation process without wearing any data collection devices on the healthy side. We also developed vision-based remote monitoring and real-time audio–video communication capabilities, which provide a remote communication channel for rehabilitation physicians or family members to support a patient’s rehabilitation process, thereby enhancing the patient’s confidence and motivation.

2. Materials and Methods

2.1. System Design Concept

The system primarily addresses two issues: first, it aims to enable patients to use the rehabilitation device independently without external assistance; second, it seeks to establish communication between patients undergoing home rehabilitation and physicians, other health professionals, and family members. Figure 1 illustrates the designed system. On the healthy side, an active visual system is employed to recognize the patient’s hand gestures. To overcome issues such as line-of-sight obstruction, a mobile platform with active adjustment capabilities for both angle and position was designed, ensuring the healthy hand remains within the optimal field of view through controlled platform movement and angle adjustments. On the impaired side, to ensure safety during the rehabilitation process, a wearable pneumatic glove is utilized to assist hand movements. The hand gesture recognition system on the healthy side communicates with the impaired side through a wireless module, and the gestures of the healthy hand drive the rehabilitation glove on the impaired side, enabling mirror rehabilitation training.

2.2. System Overall Structure

The overall architecture of the system is shown in Figure 2. On the unaffected side, the active visual image acquisition system integrates a camera with adjustable viewing angles for hand gesture recognition. This camera system captures motion images of a patient’s healthy hand, which are processed through a gesture recognition algorithm. The algorithm analyzes the images to determine the patient’s current hand gestures and generates corresponding instructions. These instructions are transmitted to the impaired side’s rehabilitation unit via a wireless module. On the affected side, considering the mature application of pneumatic hand rehabilitation systems in clinical settings and to ensure patient safety, a pneumatic glove was introduced to facilitate rehabilitation exercises. Through a self-developed rehabilitation glove control system, the system receives rehabilitation instruction information from the unaffected side and controls the inflation and deflation of the air pump to achieve hand rehabilitation movements. To facilitate remote guidance, the system uses the TCP/IP protocol for high-quality video and data transmission. This ensures the privacy and security of user data. By using this protocol, rehabilitation videos are transmitted in real time to the clinical guidance endpoint, enabling patients to engage in face-to-face remote interactions and receive guidance from rehabilitation therapists or physicians while at home.
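The paper specifies only that video and data are carried over TCP/IP, without detailing the transport implementation. As a minimal sketch of how such a real-time video link could be structured, the following Python code streams length-prefixed JPEG frames over a TCP connection; the host, port, and quality settings are placeholders, not the system’s actual configuration.

```python
import socket
import struct

import cv2

def stream_video(host: str, port: int, jpeg_quality: int = 80) -> None:
    """Capture webcam frames, JPEG-compress them, and send them over TCP.

    The receiver is assumed to read a 4-byte big-endian length header,
    then that many bytes of JPEG data, per frame.
    """
    sock = socket.create_connection((host, port))
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, buf = cv2.imencode(".jpg", frame,
                                   [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            if not ok:
                continue
            payload = buf.tobytes()
            sock.sendall(struct.pack(">I", len(payload)) + payload)
    finally:
        cap.release()
        sock.close()
```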

2.3. Design of the Active Visual Human–Machine Interaction Device for the Healthy Side

The active visual human–machine interaction device for the healthy side consists of an information computation module, a position movement module, a viewing angle adjustment module, and a camera module. Its appearance and hardware structure are shown in Figure 3. The information computation module is responsible for visual image acquisition, issuing control commands, viewing angle adjustment, and network communication. The camera is integrated into the viewing angle adjustment module and the position movement module, which are responsible for adjusting the camera’s position and direction to prevent issues such as the patient’s hand moving out of view or being obstructed due to a fixed camera angle. To ensure flexible camera position movement, a four-wheel independently driven Mecanum wheel mechanism is employed, allowing the planar translation and rotational movement of the camera system and controller. For the viewing angle adjustment module, a mechanism consisting of a rotary motor and a worm gear enables the pitch adjustment of the camera’s viewing angle. The position movement module and the viewing angle adjustment module together form a 5-degree-of-freedom system, enabling comprehensive adjustments of the camera’s position and viewing angle to meet the requirements of various rehabilitation scenarios.
As shown in Figure 4, the hardware structure of the interaction device includes a general-purpose tablet computer as the information computation module with which to enhance the device’s versatility, providing both computational power and an interface for image visualization and interaction. The interaction device is equipped with six closed-loop stepper motors driven by an Arduino Mega 2560 microcontroller. Motors M1 to M4 enable omnidirectional movement on the horizontal plane via the corresponding Mecanum wheels, while motors M5 and M6 control the two-dimensional viewing angle. The microcontroller communicates with the computer via a serial port. The two-dimensional movement platform and viewing angle adjustment device communicate with a patient via a wireless transmission module, allowing for manual or autonomous adjustments.

2.4. Active Visual Viewing Angle Control Algorithm

The active visual interaction device comprises six motors, forming the position-adjustment mechanism and the camera viewing angle adjustment mechanism. For position adjustment, the Mecanum wheels in the system are configured with roller structures oriented at 45° or 135°. When rotating, the wheels experience resultant forces in these directions. By adjusting the direction and speed of different wheels, the resultant force on the position platform can be modified, enabling control of the movement platform to achieve X-axis translation, Y-axis translation, and yaw rotation. Figure 5 illustrates the motion patterns of the visual system under different motor rotations: Figure 5a,b show control of the movement platform along the X-axis; Figure 5d,e show control along the Y-axis; and Figure 5c,f show control of platform rotation. According to this control principle, when the active visual human–machine interaction device is positioned on a table or the ground, it can flexibly adjust to any position and rotational posture on the plane.
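As a concrete illustration of this control principle, the sketch below applies the standard inverse kinematic model for a four-Mecanum-wheel platform with 45° rollers, mapping a desired platform velocity to individual wheel speeds. The wheel radius and chassis half-dimensions are hypothetical values, not the device’s measured geometry, and this is not the authors’ firmware.

```python
import numpy as np

def mecanum_wheel_speeds(vx: float, vy: float, wz: float,
                         r: float = 0.03, lx: float = 0.08,
                         ly: float = 0.07) -> np.ndarray:
    """Standard inverse kinematics for a four-wheel Mecanum platform.

    vx, vy: desired platform velocity (m/s); wz: yaw rate (rad/s);
    r: wheel radius (m); lx, ly: half wheelbase and half track width (m).
    Returns wheel angular speeds [front-left, front-right, rear-left, rear-right].
    """
    k = lx + ly
    return np.array([vx - vy - k * wz,    # front-left
                     vx + vy + k * wz,    # front-right
                     vx + vy - k * wz,    # rear-left
                     vx - vy + k * wz]) / r  # rear-right

# Pure sideways translation: only vy is non-zero, so opposite-signed wheel
# speeds produce the lateral motion patterns shown in Figure 5d,e.
# mecanum_wheel_speeds(0.0, 0.1, 0.0)
```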
The mechanism for adjusting the viewing angle and posture of the active visual system consists of two rotating joints and rigid linkages. As shown in Figure 6, the pose matrix ${}^{base}T_{cam}$ of the camera relative to the rotating joint base can be computed using Equation (1):

$${}^{base}T_{cam} = {}^{base}T_{link1}\,{}^{link1}T_{link2}\,{}^{link2}T_{cam}. \quad (1)$$
Among them, ${}^{base}T_{link1}$ represents the transformation matrix from the rotary joint base to the pitch joint center, as expressed by Equation (2), and is a function of the rotary motor angle. ${}^{link1}T_{link2}$ denotes the coordinate transformation matrix from the pitch joint end to the rotary joint center, as expressed by Equation (3), and varies according to the pitch motor angle. ${}^{link2}T_{cam}$ denotes the coordinate transformation matrix from the camera to the pitch joint center.
$${}^{base}T_{link1} = R_z(\theta_1), \quad (2)$$
$${}^{link1}T_{link2} = R_z(\theta_{cam})\,T_z(d)\,T_x(a)\,R_x(\alpha). \quad (3)$$
Equations (2) and (3) are derived based on the Denavit–Hartenberg (DH) method [26]. In Equation (2), ${}^{base}T_{link1}$ involves only a rotational transformation corresponding to the first joint angle, $\theta_1$, of the robot. In Equation (3), ${}^{link1}T_{link2}$ is a function of the camera’s pitch angle, $\theta_{cam}$, and includes three DH parameters: two translation parameters, $d$ and $a$, and one rotation parameter, $\alpha$. The parameters $d$, $a$, $\alpha$, and ${}^{link2}T_{cam}$ can be determined using standard DH parameter calibration methods [26].
Thus, this allows for the calculation of the functional relationship between the horizontal rotation motor angle, the pitch motor angle, and the camera viewpoint, enabling camera viewpoint control. Note that control of the camera’s horizontal rotation angle primarily depends on $\theta_{base}$ and $\theta_{cam}$. Here, $\theta_{base}$ represents the change in the camera’s calibrated angle relative to the origin in the horizontal direction, determined by the combined rotation of the mobile base angle $\theta_0$ and the first joint angle $\theta_1$.
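For illustration, the sketch below composes Equations (1)–(3) with NumPy homogeneous transforms. The elementary rotation and translation matrices follow the standard DH conventions; the calibrated parameters $d$, $a$, $\alpha$, and ${}^{link2}T_{cam}$ are assumed to be supplied from the calibration step described above.

```python
import numpy as np

def Rz(t):
    """Homogeneous rotation about the z-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0],
                     [0, 0, 1, 0], [0, 0, 0, 1.0]])

def Rx(t):
    """Homogeneous rotation about the x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0],
                     [0, s, c, 0], [0, 0, 0, 1.0]])

def Tz(d):
    """Homogeneous translation along the z-axis."""
    m = np.eye(4); m[2, 3] = d; return m

def Tx(a):
    """Homogeneous translation along the x-axis."""
    m = np.eye(4); m[0, 3] = a; return m

def camera_pose(theta1, theta_cam, d, a, alpha, T_link2_cam):
    """Compose Eq. (1) from Eq. (2), Eq. (3), and the calibrated link2->cam pose."""
    T_base_link1 = Rz(theta1)                                   # Eq. (2)
    T_link1_link2 = Rz(theta_cam) @ Tz(d) @ Tx(a) @ Rx(alpha)   # Eq. (3)
    return T_base_link1 @ T_link1_link2 @ T_link2_cam           # Eq. (1)
```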

2.5. Construction of Patient-Side Rehabilitation Glove System

To facilitate the rehabilitation movements of the patient’s hand and ensure the safety of the rehabilitation glove, we adopted a modified approach utilizing commercial pneumatic rehabilitation gloves. Common low-cost pneumatic mirrored rehabilitation gloves available on the market comprise a control glove for the healthy side, a pneumatic driving unit, and a wearable glove. The control glove for the healthy side is connected to the pneumatic driving unit via control cables, driving the wearable pneumatic glove for rehabilitation movements. We retained the driving unit and wearable glove components from the commercial pneumatic glove and self-developed a wireless control system capable of communicating with the driving unit. This system comprises a microcontroller, a wireless serial port transceiver, and relays. During operation, the wireless serial port transceiver receives wireless commands from the active visual human–machine interaction device on the healthy side and drives relays to control the operation of the rehabilitation glove unit, thereby mirroring the healthy-side movements on the patient side based on visual information.
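A minimal sketch of the healthy-side end of this command path is shown below, assuming a pyserial link to the wireless serial transceiver. The single-byte command codes are hypothetical, since the commercial driving unit’s protocol is not specified in the paper.

```python
import serial  # pyserial

# Hypothetical single-byte command protocol for the glove driving unit.
CMD_FLEX, CMD_EXTEND, CMD_STOP = b"F", b"E", b"S"

def send_glove_command(port: str, cmd: bytes) -> None:
    """Send one command byte through the wireless serial transceiver."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(cmd)

# e.g. a recognized fist on the healthy side triggers glove flexion:
# send_glove_command("COM5", CMD_FLEX)
```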

2.6. Vision-Based Gesture Recognition

To achieve accurate hand gesture recognition, this study proposes a gesture recognition algorithm utilizing hand joint angle features, employing the relative angle changes among multiple joints to comprehensively determine gestures. As illustrated in Figure 7, the algorithm comprises three primary stages: hand feature point extraction, joint angle calculation, and machine learning classification.

2.6.1. Gesture Key Point Recognition

The recognition of hand key points is fundamental to gesture recognition in this study. We employ neural networks to identify the skeletal key points of the hand accurately. MediaPipe is an open-source framework developed by Google that provides a range of pre-trained models and tools for the processing of multimedia data, including images. As shown in Figure 8, the MediaPipe framework identifies the coordinates of 21 hand key points (20 finger joints plus 1 wrist point) in a single image of the palm. The three-dimensional coordinates of these 21 hand points reflect the current gesture status, and analyzing these coordinate data enables stable gesture recognition.
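The following sketch shows one way the 21 landmarks could be obtained per frame with the MediaPipe Hands solution API; the single-hand setting and confidence threshold are illustrative choices, not the study’s reported parameters.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False,
                                 max_num_hands=1,
                                 min_detection_confidence=0.5)

def extract_keypoints(frame_bgr):
    """Return 21 (x, y, z) hand landmarks from a BGR frame, or None if no hand is found."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    # x, y are normalized image coordinates; z is depth relative to the wrist.
    return [(p.x, p.y, p.z) for p in result.multi_hand_landmarks[0].landmark]
```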

2.6.2. Angle Feature Extraction

After obtaining the coordinates of the hand key points, the angle between the two phalanges meeting at a finger joint can be calculated. Treating the two phalanges as direction vectors $\mathbf{a}$ and $\mathbf{b}$ originating from the joint, the angle between them is given by Equation (4):

$$\theta = \arccos\left(\frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert}\right). \quad (4)$$
According to the anatomical structure of the hand, the human hand has a total of 14 phalanges: the thumb has 2 phalanges, and the other four fingers have 3 phalanges each. We calculated the angles between the phalanges of each finger and the angles between the proximal phalanx of each finger and the metacarpal bone, yielding two angle measurements for the thumb and three for each of the other four fingers; these were used to assess the flexion and extension of each individual finger. Additionally, we calculated the angles between the metacarpal bones of adjacent fingers and the angle between the metacarpal bones of the thumb and little finger, yielding 5 angle measurements used to assess finger abduction and adduction. In total, we obtained 19 real-time angle measurements and used these data to determine the gesture outcome.
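A sketch of this feature extraction is given below, implementing Equation (4) over landmark triples. The exact pairing of joints (for example, using wrist-to-MCP vectors as stand-ins for the metacarpal directions in the abduction angles) is our assumption about one plausible realization, not the authors’ published mapping.

```python
import numpy as np

def joint_angle(p_joint, p_a, p_b):
    """Eq. (4): angle (rad) at p_joint between the vectors toward p_a and p_b."""
    a = np.asarray(p_a) - np.asarray(p_joint)
    b = np.asarray(p_b) - np.asarray(p_joint)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# MediaPipe indices: 0 wrist; thumb 1-4; index 5-8; middle 9-12;
# ring 13-16; little 17-20 (base -> tip along each finger).
FLEXION_TRIPLES = [(2, 1, 3), (3, 2, 4)]            # thumb: 2 flexion angles
for base in (5, 9, 13, 17):                         # 3 flexion angles per other finger
    FLEXION_TRIPLES += [(base, 0, base + 1),
                        (base + 1, base, base + 2),
                        (base + 2, base + 1, base + 3)]
# Abduction/adduction: adjacent metacarpal pairs plus thumb-little, at the wrist.
ABDUCTION_TRIPLES = [(0, 2, 5), (0, 5, 9), (0, 9, 13), (0, 13, 17), (0, 2, 17)]

def angle_features(landmarks):
    """Map 21 (x, y, z) landmarks to the 19-dimensional joint-angle feature vector."""
    return np.array([joint_angle(landmarks[j], landmarks[a], landmarks[b])
                     for j, a, b in FLEXION_TRIPLES + ABDUCTION_TRIPLES])
```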

2.6.3. Definition of Target Gestures

Considering the basic requirements of hand function rehabilitation training, this study initially identifies three common rehabilitation actions—fist clenching, hand extension, and finger pointing—as targets for visual system gesture recognition. Additionally, to validate the accuracy and performance of the gesture recognition algorithm, we designed individual tests for the separate extension of each finger. Figure 9 illustrates the eight gestures proposed for recognition by the gesture recognition algorithm in this study.

2.6.4. Gesture Classification Algorithm

The described joint angle calculation method maps each gesture in the images onto a joint angle feature vector, from which a suitable classifier is constructed to determine the gesture category. Various existing classification methods were considered, such as decision trees, Bayesian methods, and AdaBoost. To identify the optimal model based on metrics such as accuracy, prediction time, and training time for our specific task scenario, we utilized the AutoGluon-Tabular tool to test the performance of 14 different algorithms. Each model was evaluated, and the best-performing classification model (XGBoost algorithm) was selected.
XGBoost (eXtreme Gradient Boosting) is an implementation of the gradient boosting algorithm that improves model performance by sequentially combining multiple weak learners. The algorithm excels in structured data modeling and classification tasks, offering various advantages including regularization techniques, support for parallel processing, automatic handling of missing values, feature importance evaluation, and more. Due to its strong performance on large-scale datasets and high flexibility, XGBoost is widely applied across various machine learning tasks. After comparing multiple classification algorithms, we ultimately chose the XGBoost algorithm to implement gesture classification.
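As an illustration of this final training step, the sketch below fits an XGBoost classifier on the 19-dimensional angle features; the file name and hyperparameters are placeholders rather than the values used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Rows: 19 joint-angle features followed by an integer gesture label 0-7.
# (File name is hypothetical; see the dataset recording sketch below.)
data = np.loadtxt("gesture_angles.csv", delimiter=",")
X, y = data[:, :19], data[:, 19].astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```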

2.6.5. Dataset Construction

To address the personalized differences in patients’ gestures, we developed specialized data collection software to create a local dataset, thereby enhancing the recognition algorithm’s robustness to these variations. We created gesture video capture software using the Python (3.9), OpenCV (4.9), and PyQt5 libraries. Upon launching the software, users place their palm within the camera’s field of view, enabling the camera to recognize key points on the hand. After selecting the test subject sequence and the gesture to be recorded, clicking “Start” initiates the recording of angular features and session timing, which ceases when the set duration is reached. Once a recording completes, the system stores the labeled angular feature data and the video of the recorded gesture in the appropriate folder, facilitating subsequent algorithm training and evaluation.
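The capture loop at the core of such recording software might look like the following sketch, which reuses the extract_keypoints and angle_features helpers sketched earlier; the actual tool additionally provides a PyQt5 interface, subject and gesture selection, and video storage.

```python
import csv
import time

import cv2

def record_gesture(label: int, duration_s: float, out_path: str) -> None:
    """Append labeled 19-angle feature rows for duration_s seconds of webcam video."""
    cap = cv2.VideoCapture(0)
    t_end = time.time() + duration_s
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        while time.time() < t_end:
            ok, frame = cap.read()
            if not ok:
                break
            landmarks = extract_keypoints(frame)   # MediaPipe helper sketched above
            if landmarks is None:
                continue                           # skip frames with no detected hand
            writer.writerow(list(angle_features(landmarks)) + [label])
    cap.release()

# e.g. record 30 s of gesture class 2 for one subject:
# record_gesture(label=2, duration_s=30.0, out_path="gesture_angles.csv")
```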

3. Results

In the results section, we initially present the system prototype, followed by the validation of the active visual gesture recognition method’s performance, and conclude with the application testing of the designed equipment.

3.1. Prototype

The system prototype is illustrated in Figure 10. The motion intention recognition and interaction controller for the patient’s healthy side is a Microsoft Surface tablet (Surface Go 2, Intel(R) Pentium(R) CPU 4425Y, 4.00 GB). The angle adjustment is driven by 42 mm stepper motors (KH2308, 0.4 N·m, 1.2 A, PFDE Company, Wenzhou, China), and the patient’s hand image is captured using the tablet’s built-in camera. To ensure system stability and low power consumption, the algorithm processes every third frame and transmits control signals accordingly. The control system’s movement is managed by an Arduino Mega 2560, which drives four independent closed-loop stepper motors. Movement is achieved through a Mecanum wheel system, enabling in-place translation and rotation. The rehabilitation glove for the affected side has obtained medical device registration certification (Fourier Intelligent Company, Shanghai, China). For the affected-side rehabilitation glove system, a wireless serial transceiver (nRF24L01) transmits signals to the microcontroller (STM32F103). The microcontroller, in turn, controls a relay via I/O to operate the pneumatic control box, facilitating rehabilitation training.
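A minimal sketch of this frame-skipping recognition loop is shown below, reusing the helpers and classifier sketched earlier; the command-forwarding call at the end is hypothetical.

```python
import cv2

cap = cv2.VideoCapture(0)
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 3 != 0:      # analyze only every third frame to limit CPU load
        continue
    landmarks = extract_keypoints(frame)        # MediaPipe helper sketched earlier
    if landmarks is not None:
        gesture = clf.predict(angle_features(landmarks).reshape(1, -1))[0]
        # Forward the recognized gesture to the glove controller, e.g.
        # send_glove_command("COM5", CMD_FLEX)  # mapping gesture->command is hypothetical
cap.release()
```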

3.2. Gesture Recognition Algorithm Performance

3.2.1. Gesture Recognition Dataset

This study constructed a local dataset with which to train the gesture recognition algorithm. Ten participants were instructed to perform the aforementioned eight gestures using their right hand, with randomized variations in hand position and orientation. Each participant’s gestures were recorded for approximately 30 s using data logging and annotation software, as illustrated in Figure 11. The system captured approximately 13 frames per second, resulting in a total of about 4000 frames. These data were utilized for the training and cross-validation of the 14 models mentioned previously. Additionally, we selected another eight participants outside of the training set using the same methodology to construct a dataset for testing the selected models.

3.2.2. Algorithm Selection

This study evaluated and selected algorithms based on three criteria: scoring, prediction time, and training time. Figure 12 presents the average scores of the 14 selected machine learning algorithms from ten-fold cross-validation on the training set.
Accuracy and model prediction time are critical factors that significantly influence the operation of mirror rehabilitation systems. In terms of test scores, each model demonstrates excellent results, averaging around 97.5%, highlighting the robustness of visual-based gesture recognition methods; however, noticeable differences in prediction times exist among the various algorithms. Furthermore, considering the impact of pre-training processes on patient experience, and noting that most models have a pre-training time range of 3 to 15 s, prediction and pre-training times were prioritized as key performance indicators. Thus, XGBoost was selected as the final predictive model, given its balanced performance across these metrics.
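The ten-fold evaluation itself can be reproduced in a few lines, as in this sketch (with X and y as in the earlier training sketch; the scoring choice is illustrative):

```python
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# 10-fold cross-validated accuracy on the 19-dimensional angle features.
scores = cross_val_score(XGBClassifier(), X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```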

3.2.3. Validation of Gesture Recognition Algorithm Accuracy

We collected gesture data from eight healthy volunteers to further validate the performance of the selected gesture recognition algorithms. The system was evaluated by these volunteers under conditions where they independently wore a left-hand (LH) exoskeleton and used rehabilitation equipment, simulating a real rehabilitation scenario for an impaired hand. Participants were instructed to maintain a fixed gesture with their right hand (RH) while freely moving and adjusting its position within the camera’s field of view.
Each frame of actual usage was analyzed for gesture recognition, accounting for variability among different testers and types of gestures. The evaluation of the algorithms was conducted from two perspectives: the testers and the types of gestures performed. Figure 13a–c show the precision, recall, and F1-Score for different gestures, respectively. Figure 14a–c illustrate the gesture recognition accuracy of the algorithm for different test subjects. Figure 15 is the confusion matrix of the classification algorithm on the test set.
It can be observed that XGBoost performs well in both precision and recall, and its F1-score indicates a good balance between the two. Compared to the controlled training conditions, there is a noticeable decrease in accuracy during actual use. On the one hand, the lack of depth perception in monocular vision causes densely distributed hand key points to occlude each other in images, thereby reducing recognition accuracy; this is particularly evident in the fist gesture (g2), which exhibits lower accuracy. On the other hand, individual differences in gesture execution and personal habits among testers significantly influence hand key point recognition, directly affecting the feature inputs and resulting in notable variations in accuracy among individuals.

3.3. Functional Testing

We invited multiple participants for verification testing, with the right hand chosen as the unaffected hand and the left hand as the stroke-affected hand. Assisted by the active visual system, participants autonomously selected comfortable postures and positions (as shown in Figure 16). They adjusted the camera’s pitch angle by manipulating the platform’s forward and backward movements and rotations, ultimately aiming to present the healthy hand’s palm as prominently as possible in the frame. Subsequently, by activating the pneumatic glove’s mirror rehabilitation mode, they engaged in simulated mirror rehabilitation: the healthy hand’s movements were mirrored by the pneumatic glove, executing corresponding flexion and extension actions. Users found that they were able to independently and safely wear the glove after multiple attempts, with the healthy hand assisting the stroke-affected hand in donning the pneumatic glove.
In addition to hand function rehabilitation, our system facilitates daily living activities for patients. Figure 17 demonstrates the remote communication function between patients and therapists. Through our equipment, therapists can remotely observe the patient’s rehabilitation training status and provide professional guidance. Moreover, by integrating with the control terminal, the rehabilitation robot system serves as both a rehabilitation tool and an intelligent assistant, aiding in daily tasks such as controlling home devices, connecting to social networks, and monitoring as well as adjusting psychological states.

4. Discussion

This work introduces a hand function rehabilitation system utilizing active vision, designed to assist stroke hemiplegia patients in easily conducting hand function training within a home environment. Utilizing an innovatively designed active vision perception system and gesture recognition algorithm, patients can achieve hand function mirror rehabilitation training without the need for a sensory glove on the healthy hand. The system employs visual perception and gesture recognition technologies to detect the patient’s hand movements, creating a mirror linkage between the healthy-side movements and the affected-side rehabilitation device via wireless modules. Furthermore, it includes remote audio and video interaction features to offer professional guidance from rehabilitation specialists during the rehabilitation process.
A low wear load is one of the core advantages of this system. Traditional hand function rehabilitation devices typically require patients to wear complex sensor devices, especially on the healthy hand, which poses significant challenges for independent rehabilitation. In contrast, the system designed in this paper utilizes active vision technology to recognize patient gestures remotely via a camera and addresses key issues in visual recognition, such as viewing angles and occlusions, by integrating a Mecanum wheel translation system and a pitch mechanism. This solution effectively removes the need for wearable devices on the healthy hand, lowering the wear load and making rehabilitation training easier for patients. This design not only lowers the threshold for using rehabilitation equipment but also reduces the psychological burden on patients, enhancing the autonomy and continuity of rehabilitation training.
The desktop mobile robot system designed in this paper features strong flexibility and adaptability. The system achieves omnidirectional movement through Mecanum wheels and, combined with an angle adjustment module, can automatically adapt to the patient’s various postures and positions, ensuring that the healthy hand remains within the optimal camera field of view. Additionally, because the designed active vision system also possesses strong computing, perception, active observation, and display capabilities, it can act as a companion robot for the patient during non-rehabilitation periods, for example assisting patients with voice chatting and watching videos, with future potential for simple intelligent assistance and smart home integration control, as mentioned above. This enhances the practicality of the rehabilitation equipment, allowing the rehabilitation system to integrate truly into the patient’s home life and significantly improve the patient’s rehabilitation level and quality of life.
Gesture recognition algorithms are among the key technologies for implementing rehabilitation training in this system. Considering the reliability requirements for algorithm deployment and practical application, this paper employs a gesture recognition algorithm based on the joint angle features of the hand. It first utilizes the reliable MediaPipe framework to identify key points and calculate the joint angles of the patient’s hand, then trains the recognition model with personalized data, and ultimately selects the high-performance XGBoost model for gesture classification. Experimental results show that the gesture recognition algorithm has a high accuracy rate in practical applications and can effectively identify patients’ rehabilitation gestures. However, the accuracy of gesture recognition is affected to some extent by hand posture and individual differences. Future work may consider incorporating more advanced deep learning models to further enhance the robustness and accuracy of gesture recognition.
Although the system designed in this paper demonstrates certain advantages in hand function rehabilitation, there are still some limitations. First, the system’s gesture recognition accuracy shows some limitations in patients with significant individual differences. Additionally, the current system’s visual perception relies mainly on a monocular camera, lacking depth information on hand movements, which may affect the accuracy of recognizing complex gestures. Future research could consider introducing binocular vision or depth cameras to enhance the three-dimensional perception of hand movements. Furthermore, incorporating adaptive learning algorithms could allow the system to adjust to individual patient differences, thereby improving the effectiveness of rehabilitation training. Finally, considering the practical needs of home rehabilitation, the system’s portability and cost need further optimization to be more widely applicable in daily life.

5. Conclusions

In our study, we present a method for controlling hand mirror rehabilitation based on active visual structures. We designed a gesture recognition system based on active vision that analyzes hand gestures through visual perception to achieve low-load mirror training and includes functionalities for remote audiovisual guidance. The system constructed in this study can further reduce the complexity of hand function rehabilitation devices, enhancing the ability of patients to engage in rehabilitation independently.

Author Contributions

Conceptualization, L.L.; methodology, Z.C.; software, Z.C. and H.W.; validation, Y.Y., L.C., and G.X.; formal analysis, Z.C.; investigation, H.W.; resources, Y.S. and Z.Y.; data curation, B.L.; writing—original draft preparation, Z.C.; writing—review and editing, L.L.; visualization, S.Z.; supervision, J.L.; project administration, B.L.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Jiangsu Province, China (grant number: BK20230301), the Key Research and Development Program of Jiangsu, China (grant number: BE2022160), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (grant number: 23KJB310010), the National Key Research and Development Program of China (grant numbers: 2023YFC3603603, 2022YFC2405600, 2022YFC2405304), and the Innovation and Entrepreneurship Training Program for College Students in Jiangsu Province, China (grant number: 202210312098Y).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gu, Y.X.; Xu, Y.J.; Shen, Y.L.; Huang, H.Y.; Liu, T.Y.; Jin, L.; Ren, H.; Wang, J.W. A Review of Hand Function Rehabilitation Systems Based on Hand Motion Recognition Devices and Artificial Intelligence. Brain Sci. 2022, 12, 1079. [Google Scholar] [CrossRef] [PubMed]
  2. Markus, H.S. Reducing disability after stroke. Int. J. Stroke 2022, 17, 249–250. [Google Scholar] [CrossRef] [PubMed]
  3. Thrift, A.G.; Cadilhac, D.A.; Thayabaranathan, T.; Howard, G.; Howard, V.J.; Rothwell, P.M.; Donnan, G.A. Global stroke statistics. Int. J. Stroke 2014, 9, 6–18. [Google Scholar] [CrossRef] [PubMed]
  4. Hu, X.L.; Tong, K.Y.; Wei, X.J.; Rong, W.; Susanto, E.A.; Ho, S.K. The effects of post-stroke upper-limb training with an electromyography (EMG)-driven hand robot. J. Electromyogr. Kinesiol. 2013, 23, 1065–1074. [Google Scholar] [CrossRef]
  5. Yavuzer, G.; Selles, R.; Sezer, N.; Suetbeyaz, S.; Bussmann, J.B.; Koeseoglu, F.; Atay, M.B.; Stam, H.J. Mirror therapy improves hand function in subacute stroke: A randomized controlled trial. Arch. Phys. Med. Rehabil. 2008, 89, 393–398. [Google Scholar] [CrossRef]
  6. Numata, K.; Murayama, T.; Takasugi, J.; Monma, M.; Oga, M. Mirror observation of finger action enhances activity in anterior intraparietal sulcus: A functional magnetic resonance imaging study. J. Jpn. Phys. Ther. Assoc. 2013, 16, 1–6. [Google Scholar] [CrossRef]
  7. Matthys, K.; Smits, M.; Van der Geest, J.N.; Van der Lugt, A.; Seurinck, R.; Stam, H.J.; Selles, R.W. Mirror-Induced Visual Illusion of Hand Movements: A Functional Magnetic Resonance Imaging Study. Arch. Phys. Med. Rehabil. 2009, 90, 675–681. [Google Scholar] [CrossRef]
  8. Sterba, A.; Gandhi, D.; Khatter, H.; Pandian, J. Mirror Therapy in Stroke Rehabilitation: Current Perspectives. Int. J. Stroke 2020, 15, 75–85. [Google Scholar]
  9. ManovoPower. Armeo Power. Available online: https://www.hocoma.com/solutions/armeo-power/modules/ (accessed on 6 September 2024).
  10. Tyromotion. AMADEO. Available online: https://tyromotion.com/en/amadeo-for-clinics-and-therapists/ (accessed on 6 September 2024).
  11. Rehab-Robotics. Hand of Hope for Hand Rehabilitation. Available online: https://www.rehab-robotics.com.hk/hoh/RM-230-HOH3-0001-7%20HOH_brochure_eng.pdf (accessed on 6 September 2024).
  12. Forbrigger, S.; DePaul, V.G.; Davies, T.C.; Morin, E.; Hashtrudi-Zaad, K. Home-based upper limb stroke rehabilitation mechatronics: Challenges and opportunities. Biomed. Eng. Online 2023, 22, 24. [Google Scholar] [CrossRef]
  13. Yue, Z.; Zhang, X.; Wang, J. Hand Rehabilitation Robotics on Poststroke Motor Recovery. Behav. Neurol. 2017, 2017, 20. [Google Scholar] [CrossRef]
  14. Veerbeek, J.M.; van Wegen, E.; van Peppen, R.; van der Wees, P.J.; Hendriks, E.; Rietberg, M.; Kwakkel, G. What Is the Evidence for Physical Therapy Poststroke? A Systematic Review and Meta-Analysis. PLoS ONE 2014, 9, e87987. [Google Scholar] [CrossRef] [PubMed]
  15. Pierce, R.M.; Fedalei, E.A.; Kuchenbecker, K.J. A wearable device for controlling a robot gripper with fingertip contact, pressure, vibrotactile, and grip force feedback. In Proceedings of the 2014 IEEE Haptics Symposium (HAPTICS), Houston, TX, USA, 23–26 February 2014; pp. 19–25. [Google Scholar] [CrossRef]
  16. Balasubramanian, S.; Klein, J.; Burdet, E. Robot-assisted rehabilitation of hand function. Curr. Opin. Neurol. 2010, 23, 661–670. [Google Scholar] [CrossRef] [PubMed]
  17. Liu, C.; Guo, K.; Lu, J.; Yang, H. A review on the application of intelligent control strategies for post-stroke hand rehabilitation machines. Adv. Mech. Eng. 2023, 15, 20. [Google Scholar] [CrossRef]
  18. Ranzani, R.; Albrecht, M.; Haarman, C.J.W.; Koh, E.; Devittori, G.; Held, J.P.O.; Tönis, F.; Gassert, R.; Lambercy, O. Design, characterization and preliminary usability testing of a portable robot for unsupervised therapy of hand function. Front. Mech. Eng. 2023, 8, 17. [Google Scholar] [CrossRef]
  19. Yang, Z.C.; Van Beijnum, B.J.F.; Li, B.; Yan, S.G.; Veltink, P.H. Estimation of Relative Hand-Finger Orientation Using a Small IMU Configuration. Sensors 2020, 20, 4008. [Google Scholar] [CrossRef]
  20. Feng, Y.; Zhong, M.; Wang, X.; Lu, H.; Wang, H.; Liu, P.; Vladareanu, L. Active triggering control of pneumatic rehabilitation gloves based on surface electromyography sensors. PeerJ Comput. Sci. 2021, 19, e448. [Google Scholar] [CrossRef]
  21. Chen, X.; Gong, L.; Wei, L.; Yeh, S.-C.; Xu, L.D.; Zheng, L.; Zou, Z. A Wearable Hand Rehabilitation System With Soft Gloves. IEEE Trans. Ind. Inform. 2021, 17, 943–952. [Google Scholar] [CrossRef]
  22. Hu, B.; Liu, R.; Pan, L. Hand Rehabilitation Training Finger Movement Visual Monitoring Method, Involves Collecting Hand Image, Performing Image Binarization Process, Determining Finger Feature Extraction State, and Performing Finger Action Identification Process. CN106295612-A, 4 January 2017. [Google Scholar]
  23. Heinrich, C.; Cook, M.; Langlotz, T.; Regenbrecht, H. My hands? Importance of personalised virtual hands in a neurorehabilitation scenario. Virtual Real. 2021, 25, 313–330. [Google Scholar] [CrossRef]
  24. Qiu, Y.X.; Zheng, Y.X.; Liu, Y.W.; Luo, W.X.; Du, R.W.; Liang, J.J.; Yilifate, A.; You, Y.Y.; Jiang, Y.C.; Zhang, J.H.; et al. Synergistic Immediate Cortical Activation on Mirror Visual Feedback Combined With a Soft Robotic Bilateral Hand Rehabilitation System: A Functional Near Infrared Spectroscopy Study. Front. Neurosci. 2022, 16, 10. [Google Scholar] [CrossRef]
  25. Romano, D.; Sedda, A.; Dell’Aquila, R.; Dalla Costa, D.; Beretta, G.; Maravita, A.; Bottini, G. Controlling the alien hand through the mirror box. A single case study of Alien Hand Syndrome. Neurocase 2014, 20, 307–316. [Google Scholar] [CrossRef]
  26. Deacon, G.; Harwood, A.; Holdback, J.; Maiwand, D.; Pearce, M.; Reid, I.; Street, M.; Taylor, J. The Pathfinder image-guided surgical robot. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2009, 224, 691–713. [Google Scholar] [CrossRef]
Figure 1. System concept diagram.
Figure 2. System structure diagram.
Figure 3. Appearance and hardware structure of the active visual human–machine interaction device.
Figure 4. Hardware structure of the active visual human–machine interaction device.
Figure 5. Position-adjustment device designed for an active vision system utilizing Mecanum wheels: (a) move forward along the x-axis; (b) move backward along the x-axis; (c) rotate clockwise; (d) move left along the y-axis; (e) move right along the y-axis; (f) rotate counterclockwise.
Figure 6. Principle of active vision system perspective adjustment.
Figure 7. Gesture recognition system diagram.
Figure 8. Schematic of gesture key point recognition.
Figure 9. Targeted recognition of three common stroke rehabilitation gestures (fist, open palm, and finger-to-finger) and five individual finger bending gestures.
Figure 10. Physical prototype of the system.
Figure 11. Interface of the designed gesture dataset recording and annotation software.
Figure 12. Performance evaluation of various machine learning classification algorithms: (a) test set scores for different recognition algorithms; (b) prediction times of various recognition algorithms; and (c) training times of different recognition algorithms. The final selected model is indicated by the red arrow.
Figure 13. Testing performance of the selected algorithm on the local test set for eight gestures: (a) gesture precision; (b) gesture recall; (c) F1-score for the eight gestures.
Figure 14. Testing performance of the selected algorithm on the local test set for eight subjects: (a) precision in the test set; (b) recall in the test set; (c) F1-score for the eight subjects.
Figure 15. The confusion matrix of the classification algorithm on the test set.
Figure 16. System usability across various patient postures: (a) rehabilitation training with the patient seated; (b) rehabilitation training with the patient in a simulated lying position.
Figure 17. Demonstration of remote communication capabilities between patients and therapists.

