Article

Leveraging Edge Computing ML Model Implementation and IoT Paradigm towards Reliable Postoperative Rehabilitation Monitoring

by Evanthia Faliagka 1,*, Vasileios Skarmintzos 1, Christos Panagiotou 1,2, Vasileios Syrimpeis 1,3, Christos P. Antonopoulos 1 and Nikolaos Voros 1

1 Electrical & Computer Engineering Department, University of Peloponnese, M. Alexandrou 1, 22100 Patras, Greece
2 AVN Innovative Technology Solutions Ltd., 4652 Limassol, Cyprus
3 “Agios Andreas”, General Hospital of Patras, Kalavryton 37, 26332 Patras, Greece
* Author to whom correspondence should be addressed.
Electronics 2023, 12(16), 3375; https://doi.org/10.3390/electronics12163375
Submission received: 19 June 2023 / Revised: 31 July 2023 / Accepted: 3 August 2023 / Published: 8 August 2023

Abstract

In this work, an IoT system with edge computing capability is proposed, facilitating the postoperative surveillance of patients who have undergone knee surgery. The main objective is to reliably identify whether a set of orthopedic rehabilitation exercises is executed correctly, which is critical since patients often need supervision during the rehabilitation period to avoid injuries or prolonged recovery. The proposed system combines the Internet of Things (IoT) paradigm with deep learning and edge computing to classify the extension–flexion movement of one’s knee via embedded machine learning (ML) classification algorithms. The contribution of this work is multilayered: the proposed system tackles challenges at the embedded system, algorithmic, and user-friendliness levels, and its performance evaluation covers power consumption, delay, and throughput requirements as well as accuracy and reliability. Furthermore, as an outcome of this work, a dataset of labeled knee movements is made freely available to the research community with no limitations. The system provides real-time movement detection with an accuracy reaching 100%, achieved with an ML model trained to fit a low-cost off-the-shelf Bluetooth Low Energy platform. The proposed edge computing approach allows predictions to be performed on device rather than solely relying on a Cloud service. This yields critical benefits in terms of wireless bandwidth and power conservation, drastically enhancing device autonomy while reducing event detection latency. In particular, the “on device” implementation yields a drastic 99.9% reduction in wireless data transfer, a critical 39% reduction in prediction delay, and a valuable 17% increase in the event prediction rate over a reference period of 60 s. Finally, enhanced privacy is another significant benefit of the implemented edge computing ML model, as sensitive data can be processed on site and only events or predictions are shared with medical personnel.

1. Introduction

Over the last few years, multiple reports have highlighted the pressing need for a shift in caregiving technologies towards remote and personalized service provision [1,2]. This can be attributed to the fact that patients nowadays increasingly require personalized and continuous interaction with their therapists, combining more-effective treatments with cost efficiency. Additionally, it is commonly accepted that typical health-care structures (i.e., hospitals, clinics, public organizations, etc.) are reaching critical capacity levels, leading to low-quality service provision, medical personnel exhaustion with potential negative effects on patients’ health, and even life endangerment. It is therefore of utmost importance that medical staff are able to offer high-value services and successful treatments to patients in need.
Driven by such critical and pressing needs, it becomes evident that advancements in information and communications technology (ICT) domains, such as embedded systems, communication technologies, and artificial intelligence/machine learning, place cyber–physical end-to-end platforms in the position of being key enablers for future healthcare systems. This is highlighted in recent comprehensive surveys such as [3], which presents a prominent ICT-based health-care paradigm.
One of the most prominent such paradigms is that of IoT devices and platforms, which have emerged in health-care systems worldwide and are revolutionizing the way medical staff interact with patients. On one hand, they allow a wide range of services to be provided remotely (i.e., directly in the patient’s home environment) with critical benefits for all parties involved. On the other hand, the respective development platforms increasingly leverage edge computing, allowing the detection of highly complex events/conditions by the device itself and drastically expanding the reliability, scalability, lifetime, autonomy, and intelligence of the system. Consequently, the digitalization of these systems has provided many opportunities to transform typical health-care solutions into systems that treat patients with less suffering while also meeting cost, quality, and safety challenges.
The main objective of all relative efforts is to deliver practical health-care services to patients and to provide multiparametric supervision in real time. In this paper, the main objective is to develop a novel system based on IoT technologies in combination with deep learning and edge computing to offer an effective solution considering aspects at the hardware/platform level, utilization/user-friendliness level, and algorithmic design/development/evaluation level.
One of the areas where the respective systems can have a profound positive effect is orthopedic rehabilitation. The successful outcome of such processes depends on the consistent, accurate, and meticulous repetition of specific sets of exercises by the patient for possibly extended time periods (several weeks or months); reliable, concise, and personalized monitoring and reporting to medical staff; and timely feedback and encouragement being sent back to the patient [4]. Driven by this perspective, this paper puts forward a respective IoT edge-computing-capable system applying the respective technologies to the postoperative surveillance of patients who have undergone knee surgery to identify whether the rehabilitation exercises are executed correctly.
It is noted that knee arthroscopy is among the most common surgeries performed, with more than four million arthroscopic knee procedures being performed worldwide each year, according to the American Orthopedic Society for Sports Medicine [5]. In addition, the most commonly performed joint replacement surgery is total knee arthroplasty, with over 600,000 surgeries occurring annually. By 2030, the number of knee replacements is projected to rise to over three million per year [6]. Traditional outpatient follow-up appointments provide limited opportunities for review, while approximately 20% of patients continue to suffer from postoperative pain and functional limitations, impacting recovery [7,8]. Such references highlight the need for optimized rehabilitation strategies that focus on the early identification of patients with a suboptimal outcome through the assessment of early postoperative physical function [9,10].
Taking into consideration such information and needs, we aim to apply and evaluate deep learning techniques and algorithms in integrated embedded systems in the field of orthopedic rehabilitation. In this paper, we propose a novel approach where a machine learning model is created using the Edge Impulse platform [11] and an Arduino Nano 33 sense board, taking advantage of its onboard IMU and Bluetooth components [12]. Edge Impulse is an innovative developing platform for machine learning’s application in edge devices. It empowers developers to build, deploy, and scale embedded ML applications to create and optimize solutions with real-world data. It is fully compatible with the Arduino Nano 33 BLE Sense, an off-the-shelf low-cost IoT embedded system with sensors to detect color, proximity, motion, temperature, humidity, audio, and more.
A critical challenge that is addressed concerns the training of a model that can distinguish the different degrees of the angles of a moving knee that can be deployed locally in an embedded device. By performing an extension–flexion movement with one’s knee, the model is able to classify this movement in the given degrees via embedded ML classification algorithms. The importance of this classification is apparent in the orthopedic exercises that a therapist may ask a patient to execute.
The final goal is to evaluate the edge computing capability and performance of executing the trained model on an Arduino Nano 33 board; detecting the targeted event; and conveying the information wirelessly to a backend IoT platform where the data is stored, forwarded, and presented to medical personnel. Therefore, the patient only needs to use a wearable device and to practice the “flex-extend” exercises given by the therapist without the need to be transported to a doctor’s office or a medical/rehabilitation center.
The proposed system, by running a machine learning algorithm on device, yields a valuable reduction in event detection delays as well as in bandwidth and energy requirements and, at the same time, improves privacy. Machine learning techniques typically require a great number of resources to work properly; the goal here is to develop algorithms that are tailored to embedded platforms. We assume that a machine learning model has been trained offline on a prebuilt dataset. Usually, the data gathered by the edge device are sent to a Cloud platform, which is responsible for performing the training and finally distributing the resulting model back to the edge device.
The rest of this paper is organized as follows: Section 2 presents the background information required to better understand the design approach concerning the knee rehabilitation problem and the deep learning methodology. Section 3 describes the architecture and the methods of the implemented system and the information workflow. Additionally, Section 3 includes the proposed methods, which include the machine learning process (including the preprocessing phase and the training phase), the live classification demo, and edge computing implementation. Section 4 provides the experimental results, and Section 5 is a discussion on the presented methods and future work aspects. Finally, Section 6 highlights the main conclusions of this paper.

2. Background Information

In this section, background information concerning knee rehabilitation importance is presented. Knee movement is analyzed, as there is a need to provide a system that monitors the correct execution of the rehabilitation exercises. Additionally, background information concerning the systems that provide rehabilitation monitoring on device is presented.

2.1. Knee Rehabilitation Introduction

The knee joint is a modified hinge joint (ginglymus). The knee’s range of motion consists of the following movements: flexion, extension, internal rotation, and external rotation.
Flexion and extension are the main movements. The flexion of the knee can be measured using a goniometer placed on the knee as is shown in Figure 1. Flexion over 120° is regarded as normal. Loss of flexion is common after local trauma, effusion, and arthritic conditions.
Range of motion is a composite movement made up of extension and flexion. The combination of the two movements creates the ability to practice the movement of the knee. Extension means that the leg is straight (0°—completely straight leg), and flexion refers to a flexed leg (at around 130°) [13]. The knee’s range of motion (ROM) is an important indication of the ability to do certain activities. Following a surgery on the knee or any distortion that may (accidentally) occur, therapists have to take into serious consideration the range of motion of the knee. In the table below (Table 1), it is apparent that, when someone needs to do any activity, the degree of the bending knee plays a significant role [13].
The extension of the knee is of great importance, as it may lead to instabilities and (dangerous) falls, and, also, if the legs cannot be completely straightened, then the quadriceps muscles are continuously activated and tire quickly.
Knee flexion is equally important because it is this exact movement of the knee that lets any activity be executed. Bending the knee is crucial if one needs to stand up from a chair or safely climb stairs [13]. The major clinical problem that patients, physiotherapists, and medical doctors have to face after a knee surgery is a loss in the range of motion of the knee due to immobilization that very quickly results in joint stiffness, especially in the knee joint [14]. Unfortunately, joint stiffness is directly connected to permanent and severe pain, even when the knee joint lacks a few degrees of motion. Moreover, when referring to knee extension, even a 5° loss in extension results in an obvious pathological phenomenon, which is lameness [14]. This result is considered extremely bad since more than four million arthroscopic knee procedures are performed worldwide each year according to the American Orthopedic Society for Sports Medicine and because the majority are conducted on young patients.
Therefore, it is of great clinical importance to provide a system that is able to detect, monitor over extended periods of time, and estimate postoperative knee range-of-motion restoration. “Extended duration” refers, on one hand, to continuous monitoring, which is indeed possible and which is why parameters such as ease of use without the need for other specialized equipment or assistance are important. On the other hand, “extended duration” refers to the fact that the user may need to keep the system at his/her premises for many weeks or even months. In such cases (which are actually the most typical ones), even if the system is used only when conducting the exercises, the benefits of the edge computing implementation, especially low-power operation, remain critical.
The most-used exercises for knee flexion–extension physiotherapy are typical flexion–extension movements performed either sitting on a chair or lying on a bed. Driven by this need, in this paper, we have examined knee movements of 20°, 45°, 60°, 90°, and 120°. These angles were chosen because 20° is the normal anatomical position of the knee; 45° is an important milestone in knee flexion; 60° shows progression over the 45° milestone and, as shown in Table 1, is very close to achieving the activity of walking without a limp; 90° is related to the activity of ascending–descending stairs; and 120° is considered to be a level of knee flexion that will not cause any future pain, even after a knee injury or surgery [14].

2.2. Deep Learning and Its Role in Rehabilitation

Deep learning is a subcategory of machine learning that allows computers to learn directly from raw data without the need for human-engineered features [15]. Deep learning can be accomplished through the organization of a neural network, where the input data are forwarded to the so-called hidden layers, a set of processing nodes that are properly connected to one another. Certain processes are performed in these units, and a final outcome is presented by the output layer. Through repetitive processes, the re-evaluation of the weights (connections) between the neurons (nodes), and a suitable activation function (a process called backpropagation), the network succeeds at learning. Thus, any new input can successfully be categorized into a class. Deep learning provides all the necessary tools to acquire, process, and properly use the data that come from the sensors and, in that way, to build models that satisfy specialized medical applications. Edge machine learning is a technique whereby smart devices process data using machine and deep learning algorithms locally, reducing reliance on Cloud networks while offering performance advantages.
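As a purely illustrative aid (not part of the proposed system), the following minimal C++ program shows the learning loop just described for a single neuron: a forward pass through a sigmoid activation, an error term, and weight updates scaled by a learning rate over repeated training cycles.

```cpp
// Single-neuron illustration of the learning loop described above: forward
// pass, error, and weight update scaled by a learning rate. This is a generic
// toy example, not the network used in this paper (see Section 3.4 for that).
#include <cmath>
#include <cstdio>

int main() {
    // Toy task: learn y = 1 when the input exceeds 0.5, else y = 0.
    const double inputs[4]  = {0.1, 0.3, 0.7, 0.9};
    const double targets[4] = {0.0, 0.0, 1.0, 1.0};

    double weight = 0.0, bias = 0.0;
    const double learning_rate = 0.5;

    for (int cycle = 0; cycle < 200; ++cycle) {           // "training cycles"
        for (int i = 0; i < 4; ++i) {
            const double z = weight * inputs[i] + bias;
            const double prediction = 1.0 / (1.0 + std::exp(-z));  // sigmoid activation
            const double error = prediction - targets[i];
            // Gradient of the logistic loss with respect to weight and bias:
            weight -= learning_rate * error * inputs[i];
            bias   -= learning_rate * error;
        }
    }
    printf("learned weight=%.2f bias=%.2f, P(y=1 | x=0.8)=%.2f\n",
           weight, bias, 1.0 / (1.0 + std::exp(-(weight * 0.8 + bias))));
    return 0;
}
```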
Many efforts have been made to combine IMUs (electronic devices that measure and report a body’s movement and orientation using a combination of accelerometers, gyroscopes, and magnetometers) with healthcare services. However, most of them [16,17] exploit sensors, mainly accelerometers, and are compared to the traditional methods for evaluating the rehabilitation process, such as patient-reported outcome measures and performance-based outcome measures. Additionally, some efforts have focused on determining angles using inertial sensors [18,19]. Even though this is a straightforward method, mainly due to the sensors used (Xsens), it needs calibration and a statistical analysis using linear regression, which are major obstacles for a patient who has to be able to use the system alone and obtain real-time results. The majority of these efforts require a large number of IMUs to determine the joint angle, like [20], which needs seven IMUs, or [21], which needs a seventeen-sensor suit that must be calibrated in a neutral pose. According to a systematic review [22], most motion capturing applications have limited portability or require a camera or time-consuming and frequent calibrations. According to the same review, the accuracy of the developed machine learning algorithms typically ranged from 93% to 97.3%, but none of the systems allow classification to be performed on device and in real time.
Through the work in this paper, the authors aim to address a gap, since few studies have dealt with the use of IMUs for remote rehabilitation [23] and even fewer have dealt with kinetic prediction using deep learning techniques with wearable sensors [17]. In a systematic review conducted in 2021 [24], it was stated that smartphone-based ROM tests have been developed for many joints, including the knee, shoulder, wrist, and lumbar joints, and that they primarily analyze motion or posture by using IMU sensors or image processing with a smartphone camera, but none of them perform the classification process on device.
Only in the last two years have efforts been reported that make use of machine-learning-based embedded devices in the field of health care [25]. The existing studies use embedded devices to propose a hearing aid device [26], to monitor vital signs (body temperature, breathing pattern, and blood oxygen saturation) [27], and to recognize human activity [28]. In all these studies, TinyML [29], a field that facilitates running machine learning on embedded edge devices, was used.
The human activity recognition (HAR) problem has previously been treated as a typical pattern recognition problem and, more specifically, a classification problem, that is, identifying the activity being performed by an individual at any given moment [30]. For this reason, most HAR solutions have been developed using artificial intelligence methods through various machine learning techniques, including shallow (e.g., support vector machine (SVM), decision tree, naive Bayes, and KNN) and deep (e.g., convolutional neural network (CNN), recurrent neural network (RNN), restricted Boltzmann machine (RBM), stacked autoencoder (SAE), deeply connected network (DFN), and deep belief network (DBN)) algorithms. NN classifiers have significantly improved performance in supervised classification tasks related to movement detection compared to conventional techniques.
This is the first effort to create a rehabilitation system for flexion–extension classification using AI techniques and, additionally, to enhance the method by running neural networks on edge computing devices to reduce computational resources and increase privacy.

3. System Architecture, Materials, and Methods

In this section, the high-level architecture of the proposed system is described as well as the data acquisition process that was followed to create the dataset and all the required equipment and materials. Furthermore, in this section, the machine learning process is described with all the required steps. To accurately and reliably detect the required event, a machine learning model was created using the Edge Impulse platform. The main objective was to train a model that could distinguish different degrees of angles of a moving knee, taking into consideration the different rotations existing in flex and extension movements as well as the fact that these are highly personalized movements as analyzed in Section 2. By performing an extension–flexion movement with one’s knee, the model would be able to classify this movement in the given degrees. The importance of this classification is apparent in orthopedic exercises that a therapist may ask a patient to execute. The final goal, leveraging the edge computing paradigm, was to import and execute the trained model on an Arduino Nano 33 embedded platform to take advantage of the respective wireless interfaces and to send the predictions made by the model running on device. Therefore, the patient would only need to have a wearable device and practice the “flex-extend” exercises without the need to be transported to a doctor’s office or a medical/rehabilitation center.

3.1. End-to-End Architecture

In Figure 2, the system’s high-level architecture is presented, indicating the main components and the main process steps. The training process involved 10 volunteer subjects, and all of them gave their informed consent for inclusion before they participated in the study. This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the ESDA Ethics Committee (ORMD 2023/6).
All subjects had different characteristics, such as age, weight, and height, so as to have a representative sample. They all performed the same movements, and the data collected were forwarded to a Cloud backend platform [31]. After initial preprocessing, all data were stored in a Cloud database and were sent to the Edge Impulse platform. The training procedure was followed as described in Section 3.3, and the classification model was created. Following the required iterations to adequately adjust the parameters and achieve the most suitable model, the optimal trained model was transferred to the embedded device and was ready to be used on the device. Each time the patient made a movement, it was classified into one of the predetermined classes, and the prediction could be sent to the backend infrastructure or the doctor’s mobile using the Bluetooth wireless interface.

3.2. Data Acquisition and Experimentation Setup

The purpose of this experiment was to create an embedded machine learning model that was able to distinguish the different positions an adult could bend his/her leg. The degrees of the angles that we wanted the model to recognize were 20°, 45°, 60°, 90°, and 120°.
To achieve this objective, as indicated in the system architecture section, an Arduino Nano 33 BLE Sense board was used to acquire the required training data from the onboard IMU sensor. This board is considered ideal for IoT solutions because it offers a variety of sensors (IMUs and wireless interfaces being the most relevant to our experiment) in small dimensions (18 × 45 mm) and because it supports models trained by the Edge Impulse platform. Using the board’s sensors, the movement of the leg was monitored, focusing on up and down movements of the leg from the knee and below, and the respective angle formed by this movement was recorded.
In order to objectively record the ground truth measurements of the angles of interest, two instruments were used: (i) a universal goniometer and (ii) a telescopic range-of-motion knee brace with a built-in goniometer. The Arduino Nano 33 was attached to each of these devices, and three different setups were considered.
In the first case, the board was attached to the edge of the goniometer (Figure 3a). The goniometer was then placed on a steady surface. Then, while holding one brace of the goniometer steady, we started moving the other brace (with the Arduino board attached) up and down, forming, in that way, the desired angle. In this manner, we tried to simulate the extension and flexion of a human knee at the angles of interest.
The second case followed the same procedure, but, this time, the goniometer was placed on the knee of an adult person (Figure 3b), thus taking into consideration all the involuntary rotations and movements in the two other axes existing when a realistic leg movement is performed. The person (starting from a completely stretched leg—0 degrees) started moving the leg up and down, forming the desired angle according to the indication of the goniometer. In more detail, the person was sitting on a table with their lower limbs hanging down. The femur was in parallel with the table, while the tibia formed a 90-degree angle with the femur. The goniometer was placed medially and laterally at the level of the knee.
The third case involved a telescopic range-of-motion knee brace (Figure 3c). This knee brace is a universal, easily adjustable, and secure support for patients recovering from knee surgery, injuries, or instabilities. The range of motion can be easily adjusted via quick clips with a simple "pull and slide" motion. The mechanism of the device has degree indications, so we could easily choose the range of motion of the leg in degrees. For example, by pulling and sliding the regulator to 45°, we had a range of motion (from full extension) of 45°. After wearing this device, the Arduino Nano was attached to the lowest part of the brace, and we started experimenting with the different angles. The dataset created with the third method was the one used for the edge computing implementation, and it is freely available at https://www.kaggle.com/datasets/billskarm/knee-range-of-motion.
An image of how the collected raw data were presented in the Edge Impulse platform is shown below (Figure 4). As the accelerometer on the development board has three axes, three different lines are plotted, one for each axis (accX, accY, and accZ). Accordingly, there are three lines for the gyroscope measurements (gyrX, gyrY, and gyrZ) and three more for the magnetometer measurements (magX, magY, and magZ). From Figure 4, we can see that the accelerometer y and z values did not play a crucial role in model creation, as they showed only small ripples. On the contrary, the movement that we studied clearly affected the accelerometer values on the x axis, as the movement was back and forth, and these values were critical for model creation.
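For reference, a minimal Arduino sketch for reading these nine axes from the Nano 33 BLE Sense’s onboard LSM9DS1 IMU (using the official Arduino_LSM9DS1 library) is given below; the CSV-style serial output and the exact loop timing are illustrative assumptions, since the actual recordings were ingested through the Edge Impulse tooling.

```cpp
// Minimal acquisition sketch for the Arduino Nano 33 BLE Sense: reads the nine
// IMU axes (accX..magZ) that appear in Figure 4 and prints them over serial.
// In the actual workflow the data were ingested into Edge Impulse; the
// CSV-style serial output here is only an illustrative assumption.
#include <Arduino_LSM9DS1.h>

void setup() {
  Serial.begin(115200);
  while (!Serial);
  if (!IMU.begin()) {
    Serial.println("Failed to initialize the LSM9DS1 IMU!");
    while (1);
  }
}

void loop() {
  float accX, accY, accZ, gyrX, gyrY, gyrZ, magX, magY, magZ;

  if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
    IMU.readAcceleration(accX, accY, accZ);    // g
    IMU.readGyroscope(gyrX, gyrY, gyrZ);       // deg/s
    if (IMU.magneticFieldAvailable()) {
      IMU.readMagneticField(magX, magY, magZ); // uT
    } else {
      magX = magY = magZ = 0.0f;               // magnetometer updates more slowly
    }

    // One comma-separated row per sample: accX is the most informative axis
    // for the back-and-forth flexion-extension movement studied here.
    Serial.print(accX); Serial.print(',');
    Serial.print(accY); Serial.print(',');
    Serial.print(accZ); Serial.print(',');
    Serial.print(gyrX); Serial.print(',');
    Serial.print(gyrY); Serial.print(',');
    Serial.print(gyrZ); Serial.print(',');
    Serial.print(magX); Serial.print(',');
    Serial.print(magY); Serial.print(',');
    Serial.println(magZ);

    delay(16);  // ~62.5 Hz, matching the sampling frequency in Table 2
  }
}
```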
For each case, each volunteer performed continuous leg extension and flexion movements at each angle, and several samples were collected lasting 10–12 s each. In the third case (knee brace), for example, 62 samples were created. These samples were almost equally distributed throughout the classes. Specifically, the 20° class had 11 samples, the 45° class had 13 samples, the 60° class had 13 samples, the 90° class had 14 samples, and the 120° class had 11 samples.
Based on the experiments conducted, the setup where the knee brace was used offered the most realistic environment for the experiment. The knee brace brought stability and efficiency: the Arduino board was held steadily on the brace, and its position remained unchanged. The two setups that included the goniometer represented an easier way to try to create a model. However, simply recording the movement of a goniometer without taking into consideration the mechanics of the movement of a human knee gave false predictions. By observing the anatomy of a human leg, we can understand that a goniometer cannot express the complexity of the extension and flexion movements. The knee brace, on the other hand, provided a more-natural way to record flexion and extension, as the device that was planned to be assembled would be used by a patient who would probably have a knee brace fitted.

3.3. Data Preprocessing, Feature Extraction, and Training

After data collection, an annotation process was followed, and a label was added to each sample so as to perform supervised classification. The raw data were sliced up into smaller windows, and signal processing blocks were used to extract features. Specifically, spectral analysis was used, which is well suited to analyzing repetitive motion, such as data from accelerometers.
It is well established [32,33] that the extraction of frequency domain features (i.e., the power characteristics of the signal) improves performance. These are highly relevant in the classification of repetitive motion based on accelerometer data, as FFT detects periodic signals and splits them into their harmonic components, reducing the dimensionality of data. Feature extraction is performed efficiently in microcontrollers via their DSP engines. The features extracted in the proposed system included the RMS value as the time domain feature and the peaks, frequency, and power characteristics of the signal as the frequency domain feature. This resulted in a fixed dataset of 33 features (11 per axis).
The window size was 5000 ms, and the window increase was 50 ms. The window size defines the length of the raw data used for each training window, while the window increase is used to artificially create more windows (and feed the learning block with more information).
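To make the role of these two parameters concrete, the following standalone C++ sketch (an illustration under assumed values, not the Edge Impulse preprocessing code) slices a synthetic ~10 s accX trace sampled at 62.5 Hz into overlapping 5000 ms windows with a 50 ms increase and computes the RMS time-domain feature for each window.

```cpp
// Minimal sketch illustrating how the window size (5000 ms) and window
// increase (50 ms) slice a raw recording into overlapping training windows,
// and how a simple time-domain feature (RMS, one of the features used per
// axis) is computed for each window. The synthetic signal is an assumption.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double PI = 3.14159265358979323846;
    const double fs_hz = 62.5;          // sampling frequency of the IMU stream
    const int window_ms = 5000;         // window size
    const int increase_ms = 50;         // window increase (stride)
    const int sample_ms = 10000;        // one recorded sample lasts ~10-12 s

    const int n_total  = static_cast<int>(sample_ms * fs_hz / 1000.0);  // samples in recording
    const int n_window = static_cast<int>(window_ms * fs_hz / 1000.0);  // samples per window
    const int n_stride = static_cast<int>(std::round(increase_ms * fs_hz / 1000.0));

    // Synthetic accX trace standing in for a real flexion-extension recording.
    std::vector<double> acc_x(n_total);
    for (int i = 0; i < n_total; ++i)
        acc_x[i] = std::sin(2.0 * PI * 0.5 * i / fs_hz);   // ~0.5 Hz repetitive motion

    int n_windows = 0;
    for (int start = 0; start + n_window <= n_total; start += n_stride) {
        double sum_sq = 0.0;
        for (int i = start; i < start + n_window; ++i)
            sum_sq += acc_x[i] * acc_x[i];
        const double rms = std::sqrt(sum_sq / n_window);   // time-domain feature
        ++n_windows;
        if (n_windows <= 3)
            printf("window %d: start=%d, RMS(accX)=%.3f\n", n_windows, start, rms);
    }
    printf("total windows from one %d ms sample: %d\n", sample_ms, n_windows);
    return 0;
}
```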
In Figure 5, there is a sketch summarizing the role of each parameter:
After preparing the models, based on the subsequent evaluation process, the optimal model was selected for the actual classification process. In the following section, focusing on the performance evaluation, the edge machine learning process is presented for the scenario where the selected model was transferred to the embedded device to perform the classification process based on the edge computing paradigm, thus offering an increase in performance capability and resource conservation.

3.4. Machine Learning Model Creation and Accuracy Evaluation

During classification, an algorithm learns from a given dataset and classifies new data into a number of classes or groups. In this paper, the model aimed to classify the angle of the patient’s knee into one of the following classes using neural networks: 20°, 45°, 60°, 90°, and 120°.
For each installation, the steps already explained were followed. Firstly, the training and the test sets were formed using an 80/20 train/test split ratio: the sampling of the first 8 participants’ movements created the training set, while the sampling of the last 2 participants’ movements created the test set. For the third case (knee brace), for example, 62 samples were created, 49 of which formed the training set, with the remaining 13 forming the test set. In total, 10,106 windows were created for the training set.
On the Edge Impulse platform, the model was built as a pipeline using the NN classifier, and the proper parameters were selected heuristically in order to achieve the most accurate model. A neural network consists of layers of interconnected neurons, and each connection has a weight. One such neuron in the input layer would correspond to the height of the first peak of the X axis (e.g., accX), and one neuron in the output layer would correspond to 20° (one of the classes). When the neural network is defined, all these connections are initialized randomly and, thus, the network makes random predictions. During training, the network makes a prediction using the raw data and then alters the weights depending on the outcome. After many iterations, the neural network learns and eventually becomes much better at predicting new data. The number of training cycles corresponds to the number of these iterations.
In the case of the proposed system, in order to obtain a well-trained model, we ran 50 training cycles; this number was determined heuristically, following a high number of repetitions in the parameter space, which indicated diminishing returns beyond 50 training cycles. The learning rate was set to 0.0005. The rest of the parameters chosen are shown in the following table (Table 2), and a graph of the neural network is shown in Figure 6. All the parameters were chosen after an iterative process so as to increase accuracy and simultaneously avoid overfitting.
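For illustration, the short C++ sketch below mirrors the layer dimensions of Table 2 (33 input features, dense layers of 20 and 10 neurons, and a 5-class output) as a plain forward pass with randomly initialized weights and a softmax over the five angle classes; the ReLU hidden activations are an assumption based on common defaults, and the actual model was defined and trained on the Edge Impulse platform.

```cpp
// Forward pass through a dense network with the layer sizes of Table 2:
// 33 input features -> 20 neurons -> 10 neurons -> 5 classes (softmax).
// Weights are random placeholders; training itself was done in Edge Impulse.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

static std::vector<double> dense(const std::vector<double>& in, int out_dim,
                                 std::mt19937& rng, bool relu) {
    std::normal_distribution<double> init(0.0, 0.1);
    std::vector<double> out(out_dim, 0.0);
    for (int j = 0; j < out_dim; ++j) {
        double acc = init(rng);                      // bias
        for (double x : in) acc += init(rng) * x;    // weighted sum (random weights)
        out[j] = relu ? std::max(0.0, acc) : acc;
    }
    return out;
}

int main() {
    std::mt19937 rng(42);
    std::vector<double> features(33, 0.5);           // 33 spectral features (placeholder values)

    auto h1 = dense(features, 20, rng, true);        // first dense layer, 20 neurons
    auto h2 = dense(h1, 10, rng, true);              // second dense layer, 10 neurons
    auto logits = dense(h2, 5, rng, false);          // output layer, one logit per class

    // Softmax over the 5 classes: 20, 45, 60, 90, and 120 degrees.
    double max_logit = logits[0];
    for (double v : logits) max_logit = std::max(max_logit, v);
    double denom = 0.0;
    std::vector<double> probs(5);
    for (int i = 0; i < 5; ++i) { probs[i] = std::exp(logits[i] - max_logit); denom += probs[i]; }

    const std::array<const char*, 5> labels = {"20", "45", "60", "90", "120"};
    for (int i = 0; i < 5; ++i)
        printf("P(%s deg) = %.3f\n", labels[i], probs[i] / denom);
    return 0;
}
```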
As indicated in Table 3, the accuracy of this model was very good in all cases (lowest accuracy: 97.61%). In order to have a system that can deal with data it has never seen before (like a new gesture), Edge Impulse provides the Anomaly Detection block. This block was used in the proposed system so that the neural network could identify odd values and not classify them into the predefined categories.

3.5. Edge Computing Implementation and Performance Evaluation

A critical contribution of this paper is the implementation and porting of the trained model to the Arduino Nano 33 BLE board to make the prediction on device. Additionally, the code was modified to enable Bluetooth Low Energy (BLE) so as to send the classification result to the doctor’s mobile. With BLE enabled, the decision of the algorithm can be sent to a remote station, e.g., a mobile device, while the angle prediction is executed on device, allowing us to accurately evaluate the respective performance of the edge computing implementation of the proposed solution.
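A minimal sketch of this on-device flow is shown below. It assumes that the Edge Impulse Arduino library exported for the project is named knee_rom_inferencing.h (the real name is project specific), that the feature buffer has already been filled with one 5 s window of IMU samples, and that the BLE service/characteristic UUIDs are placeholders; only the one-byte class index is sent over BLE instead of the raw data.

```cpp
// Minimal Arduino sketch: run the exported Edge Impulse model on device and
// notify only the predicted class index over BLE (instead of raw IMU data).
// "knee_rom_inferencing.h" is a placeholder for the project-specific library
// generated by Edge Impulse; the UUIDs below are illustrative placeholders.
#include <Arduino_LSM9DS1.h>
#include <ArduinoBLE.h>
#include <knee_rom_inferencing.h>

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];  // one 5 s window of IMU samples

BLEService kneeService("180C");                                     // placeholder service UUID
BLEByteCharacteristic kneeAngleClass("2A56", BLERead | BLENotify);  // 1-byte prediction

void setup() {
  IMU.begin();
  BLE.begin();
  BLE.setLocalName("KneeRehabNode");
  kneeService.addCharacteristic(kneeAngleClass);
  BLE.addService(kneeService);
  BLE.advertise();
}

void loop() {
  // 1. Fill `features` with accelerometer (and gyro/magnetometer) readings
  //    at 62.5 Hz until one full window is collected (omitted for brevity).
  // 2. Wrap the buffer in a signal and run the classifier on device.
  signal_t signal;
  numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // Pick the class (20/45/60/90/120 degrees) with the highest score.
    uint8_t best = 0;
    for (uint8_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      if (result.classification[i].value > result.classification[best].value) best = i;
    }
    // 3. Send only the 1-byte class index instead of the ~60 kB raw batch.
    kneeAngleClass.writeValue(best);
  }
  BLE.poll();
  delay(2000);  // 2 s pause before the next sampling window, as in Section 4
}
```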
The objectives of this exercise were as follows: (1) to evaluate the feasibility of executing a demanding ML model on off-the-shelf low-cost IoT embedded systems, (2) to perform a comparison with respect to Cloud execution, and (3) to evaluate the benefits yielded by such an edge computing paradigm.
The advantages of performing computations on the edge are summarized below:
One of the critical challenges when executing ML on the Cloud stems from the great volume of data that needs to be acquired from the sensors and transferred wirelessly. This is especially pronounced when low-bandwidth IoT communication technologies are utilized, like the highly popular and widely integrated BLE communication protocol. In such cases, the wireless network can easily fall into high-congestion scenarios with negative effects on throughput, transmission delay, and data transfer reliability. This is where the edge computing paradigm comes into play and can drastically reduce all these negative effects: instead of continuously streaming all the sensors’ raw data, the embedded platform can simply send a single event when it is detected. This drastically reduces the required bandwidth, effectively avoiding congestion situations as well as allowing a higher number of wireless sensors to share the same transmission medium.
Another advantage promised by edge computing is reduced event detection latency, in the sense that the need for data to be transferred to the Cloud infrastructure to actually detect the event is effectively removed. Such an approach can significantly improve real-time data processing and feedback provision. This is especially critical in cases where virtual coaching is offered and where monitoring and feedback on specific exercises need to comply with strict time constraints. With the on-device method, the process of sending data from the Arduino over a USB cable to the Edge Impulse platform on the Cloud is bypassed.
Interestingly enough, the edge computing implementation was also able to deliver a higher number of predictions in the same time period of 60 s. A thorough presentation of these results follows in Section 4.

4. Proposed System Evaluation Results

Relying solely on the test set, the model performed very well. In the first case, where the Arduino was attached to the goniometer, it reached an adequately high accuracy of 88.45%. There was a small anomaly percentage at 20 degrees, meaning that the model could not recognize some of the samples that were taken. It is worth noting that, even in scenarios where the angles differed by less than 30 degrees (45 and 60 degrees), the model’s accuracy was very high and it produced successful classifications.
In the second case, representing a more-realistic setup without specialized equipment, the goniometer was placed on the knee of an adult person. As this case involved realistic leg movements, the trained model achieved an overall accuracy lower than that of the first case, equal to 78.54%. The 20-degree scenario provided very good results, reaching an accuracy of 100%, and the 60-degree scenario also yielded a good prediction percentage of 73.4%.
In the third case, a telescopic range of motion knee brace was used, combining a realistic leg movement with a more-controllable experiment due to the use of specialized equipment. After having the model designed, trained, and verified, it was deployed back to the device so as to conduct a live classification demo. For this demo, 2 different people followed the same process that was followed by the 10 people who created the training set and that was presented in Section 3.3.
In Table 4, the confusion matrix of the third case’s classification is presented, showcasing the performance of the proposed method. In Figure 7, a graph with the features created is presented, where the features of each class (20°, 45°, 60°, 90°, and 120°) are shown to be clearly clustered, which is another piece of evidence of the well-performing model that was created.
For reasons of completeness, in Table 5, some of the samples of the test set are shown. For each sample, the degrees predicted are shown as well as the duration of the movement, the anomaly percentage, and the accuracy.
In order to offer an objective performance evaluation, the same users executed the same scenario in both cases (Cloud and on device), and the accuracy results were quite similar (Table 6), indicating that the embedded platform was able to reliably execute the trained model.
In order to evaluate the on-device method in terms of performance enhancement and resource conservation, three metrics were evaluated: the wireless communication bandwidth and corresponding power conservation, the event detection time reduction, and the event prediction rate increase.
For each classification in the Cloud method, a batch of approximately 60 kB of raw measurements had to be transferred from the Arduino to the Cloud server; this data transfer was achieved through a USB cable. In the on-device method, sampling was completed using the sensors of the embedded device before the execution of the algorithm, and the data were used right away with no need for any sample transfer; only the result of the prediction was transferred via BLE, which amounted to 2–3 bytes. Therefore, there was a data transfer reduction of 99.9%. If no event was identified (e.g., because the user was resting), then, as can easily be understood, the bandwidth conservation could reach 100%.
In the Cloud method, there was the extra step of uploading the sample data, which required approximately 13 s, while the "on device" algorithm needed about 8 s. Therefore, the on-device method achieved a 39% time reduction in the prediction process.
In the “on device” case, the model made a prediction every 6 s (5 s for sampling and about 1 s for making a prediction). If we add the 2 s of waiting time before sampling, every iteration of the model took, in total, 8 s. In one minute (60 s), we had 60/8 = 7.5, so there were effectively seven predictions/min. In the Cloud case, we measured six predictions/min. Therefore, there was a 17% increase in the event prediction rate when the “on device” method was used, which is explained by the inevitable overhead of raw data transmission in the Cloud case.
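For clarity, the three reported figures follow directly from the measurements above (with the 60 kB raw batch and the 2–3-byte prediction taken as representative values and results rounded to the nearest percent):

\[
\begin{aligned}
\text{data transfer reduction} &= 1 - \frac{3\,\text{B}}{60{,}000\,\text{B}} \approx 99.995\% \approx 99.9\%,\\
\text{prediction delay reduction} &= \frac{13\,\text{s} - 8\,\text{s}}{13\,\text{s}} \approx 38.5\% \approx 39\%,\\
\text{prediction rate increase} &= \frac{7 - 6}{6} \approx 16.7\% \approx 17\%.
\end{aligned}
\]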

5. Discussion and Future Work Challenges

Following a complete set of experiments with the three different setups, where the embedded system board was used to acquire the required data and to gather the model testing results, in this section we discuss the main observations extracted and the challenges encountered.
As stated in the introduction, the main driving force behind this work is to propose a system suitable for monitoring rehabilitation at home that can be continuously extended and improved. Therefore, all the requirements in relation to the most common and high-impact exercises required by real patients, the level of dynamic movement, and the angles typically monitored were imposed by the medical experts participating in our team using the respective background information that they provided. Additionally, the issues of practicality, a low cost, and easiness to use are also critical to our objective. Therefore, three different setups were discussed, varying (in some cases orthogonally) in terms of user friendliness; the requirement of additional equipment; and, of course, performance. The respective concerns were further underpinned by the requirement of having only one IMU component, which, on one hand, reduced the cost while increasing practicality but, on the other hand, proved to be sufficient for the specific rehabilitation exercises. Of course, in future work, considering more-complex movements and utilizing a higher number of IMUs could be required.
In the scenarios with the goniometer, the overall accuracy and F1 score were lower than those in the third case (Table 3). Therefore, although this setup is much easier to apply, requires no specialized medical equipment, and offers acceptable performance for a range of angles, a notable deviation from the optimal accuracy was identified.
Another observation is that the model created with the third setup offered an excellent ability to classify the different angles that we investigated. As depicted in Table 3, the accuracy of this method was much better than that of the other methods, reaching up to 100% with a very high F1 score (close to 1.00). Equally important, the training of this specific model was much faster, requiring only 19 training cycles compared to the 50 required in the previous setup scenarios.
Another noteworthy result is that, in the on-device method, the first classification result of each movement lacked accuracy. In the example explained here, the prediction was for 20 degrees, and the neural network algorithm was used as in all the experiments mentioned. Figure 8 shows that, in the first classification process, the model struggled to identify the movement, as it predicted a movement of 20° with an accuracy of 54.6% and a movement of 45° with an accuracy of 45.3%. The model gave the correct classification, but its accuracy was fairly low. After the first classification process, and while the movement of the knee continued, the model kept giving the correct classification, while the accuracy percentages improved.
In the first setup case where the Arduino Nano 33 was attached to the goniometer, we can see that the uncertainty was higher than that of the two other cases. This observation can be attributed to the fact that, besides the desired "up-down" movement along the vertical axis, the goniometer also moved slightly along the horizontal axis. The way the goniometer was assembled gave it flexibility. Specifically, the goniometer’s moving brace was connected to the circular part of the goniometer with a small metallic bolt. These small vibrations of the goniometer along the horizontal axis made the acquired training set samples more complex to distinguish, as it was less stable than the knee brace.
Focusing on the contribution regarding the edge computing implementation, a major benefit is the fact that only detected events need to be wirelessly transmitted instead of continuously streaming all the acquired raw data. This allows the system to deactivate the wireless interface for extended periods of time, contributing to power conservation. Considering that the wireless interfaces on such low-power IoT devices comprise one of the most power-hungry components, it is easily extracted that such capability can drastically increase the lifetime and autonomy of the respective devices, being a possibly critical feature when commercialization is considered.
Additionally, user privacy and data safety comprise an equally important qualitative advantage of edge computing, especially in the health domain. Edge devices have the ability to discard information that needs to be kept private, as only certain event-based information is specially encoded.
In general, the vibrations and the position of the Arduino play an important role in the results of the model. We can infer that even small changes in the direction and position of the Arduino, or any vibrations, could give different results or bring uncertainty to the models. Although this is the primary source of error in our measurements, the classification accuracy appears relatively unaffected by it, while continuously extending the training dataset further increases the classification reliability and accuracy, even when considering more-complex exercises.
As for future work, we plan to expand the training dataset, including more people and more joint (arthrosis) movements, and to address the challenges mentioned above. Another domain where future improvements and insights will be pursued concerns the selected modalities. In this work, the IMU was selected because it is probably the lowest-cost, lowest-power, most miniaturized, and most widely used modality integrated into IoT platforms, as well as one of the least obtrusive suitable ones. However, depending on the targeted exercises, the deployment site and setup, and the level of obtrusiveness allowed, other modalities are indeed attracting high interest, and we plan to compare the results of this study with methods based on camera computer vision, electrogoniometers, and EMGs. Finally, an integrated platform is planned that will offer a complete knee rehabilitation plan to patients. It should be emphasized that the proposed system is aimed at postsurgery patients. Consequently, with the right classification process, as presented here, we expect to achieve a similar performance, which will drastically increase the platform’s impact and added value.

6. Conclusions

In this paper, we have illustrated an edge-computing-capable system in detail, described the algorithms used, and presented the basic usage scenario of a high-impact health application. The proposed platform is based on an integrated embedded IoT platform and Cloud service that facilitates automatic knee movement detection based on both classic machine learning techniques and edge classification, which is critical in the context of postoperative rehabilitation monitoring. All the system components and processes are analyzed in detail, validating the importance as well as the feasibility of the proposed solution. A number of tests were performed to evaluate the developed techniques both qualitatively and quantitatively in order to test their stability and their accuracy, which reached 100%. After concluding which was the most accurate model, respective tests were undertaken to compare on-device prediction with the classic machine learning process, with great results: with edge classification, 99.9% less data were transferred, while a 39% time reduction and a 17% increase in the event prediction rate were achieved. Finally, the main challenges and future prospects of this work are discussed, while the dataset of labeled knee movements is freely available to the research community at https://www.kaggle.com/datasets/billskarm/knee-range-of-motion, comprising a critical contribution of the work conducted.

Author Contributions

Methodology, E.F. and V.S. (Vasileios Syrimpeis); Software, E.F. and V.S. (Vasileios Skarmintzos); Writing—original draft, E.F., V.S. (Vasileios Skarmintzos), C.P., V.S. (Vasileios Syrimpeis), C.P.A. and N.V.; Writing—review & editing, C.P.A. and N.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work received funding from the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement No 872614—SMART4ALL: self-sustained cross-border customized cyber–physical system experiments for capacity building among European stakeholders.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the ESDA Ethics Committee (ORMD 2023/6).

Informed Consent Statement

All 10 volunteer subjects gave their informed consent for inclusion before participating in this study.

Data Availability Statement

The dataset of the labeled knee movements is freely available at https://www.kaggle.com/datasets/billskarm/knee-range-of-motion.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Spekowius, G.; Wendler, T. (Eds.) Advances in Healthcare Technology: Shaping the Future of Medical Care; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 6. [Google Scholar]
  2. Czaja, S.; Beach, S.; Charness, N.; Schulz, R. Older adults and the adoption of healthcare technology: Opportunities and challenges. Technol. Act. Aging 2013, 9, 27–46. [Google Scholar]
  3. Sundaravadivel, P.; Kougianos, E.; Mohanty, S.P.; Ganapathiraju, M.K. Everything you wanted to know about smart health care: Evaluating the different technologies and components of the internet of things for better health. IEEE Consum. Electron. Mag. 2017, 7, 18–28. [Google Scholar]
  4. Weber, M.D.; Woodall, W.R. Knee rehabilitation. In Physical Rehabilitation of the Injured Athlete; Expert Consult-Online and Print; Elsevier Health Sciences: Amsterdam, The Netherlands, 2012; Volume 377. [Google Scholar]
  5. American Orthopaedic Society for Sports Medicine. Available online: https://www.sportsmed.org (accessed on 29 March 2023).
  6. Inacio, M.; Paxton, E.W.; Graves, S.E.; Namba, R.S.; Nemes, S. Projected increase in total knee arthroplasty in the United States—An alternative projection model. Osteoarthr. Cartil. 2017, 25, 1797–1803. [Google Scholar] [CrossRef] [Green Version]
  7. Bell, K.M.; Onyeukwu, C.; Smith, C.N.; Oh, A.; Devito Dabbs, A.; Piva, S.R.; Popchak, A.J.; Lynch, A.D.; Irrgang, J.J.; McClincy, M.P. A portable system for remote rehabilitation following a total knee replacement: A pilot randomized controlled clinical study. Sensors 2020, 20, 6118. [Google Scholar]
  8. Burland, J.P.; Outerleys, J.B.; Lattermann, C.; Davis, I.S. Reliability of wearable sensors to assess impact metrics during sport-specific tasks. J. Sports Sci. 2021, 39, 406–411. [Google Scholar] [PubMed]
  9. Luna, I.E.; Kehlet, H.; Peterson, B.; Wede, H.R.; Hoevsgaard, S.J.; Aasvang, E.K. Early patient-reported outcomes versus objective function after total hip and knee arthroplasty. Bone Jt. J. 2017, 99, 1167–1175. [Google Scholar]
  10. Yi, C.; Jiang, F.; Bhuiyan, M.Z.A.; Yang, C.; Gao, X.; Guo, H.; Ma, J.; Su, S. Smart healthcare-oriented online prediction of lower-limb kinematics and kinetics based on data-driven neural signal decoding. Future Gener. Comput. Syst. 2021, 114, 96–105. [Google Scholar]
  11. Edge Impulse. Available online: https://www.edgeimpulse.com/ (accessed on 21 February 2023).
  12. Arduino cc. Available online: https://docs.arduino.cc/hardware/nano-33-ble-sense (accessed on 21 February 2023).
  13. Perry, J.; Burnfield, J.M. Gait analysis: Normal and pathological function. J. Sports Sci. Med. 2010, 9, 353. [Google Scholar]
  14. Karim, A.; Pulido, L.; Incavo, S. Does accelerated physical therapy after elective primary hip and knee arthroplasty facilitate early discharge. Am. J. Orthop. 2016, 45, E337–E342. [Google Scholar]
  15. El Naqa, I.; Murphy, M.J. What are machine and deep learning? In Machine and Deep Learning in Oncology, Medical Physics and Radiology; Springer International Publishing: Cham, Switzerland, 2022; pp. 3–15. [Google Scholar]
  16. Poitras, I.; Dupuis, F.; Bielmann, M.; Campeau-Lecours, A.; Mercier, C.; Bouyer, L.J.; Roy, J.S. Validity and reliability of wearable sensors for joint angle estimation: A systematic review. Sensors 2019, 19, 1555. [Google Scholar]
  17. Dejnabadi, H.; Jolles, B.M.; Aminian, K. A new approach to accurate measurement of uniaxial joint angles based on a combination of accelerometers and gyroscopes. IEEE Trans. Biomed. Eng. 2005, 52, 1478–1484. [Google Scholar] [PubMed]
  18. Oliveira, N.; Park, J.; Barrance, P. Using inertial measurement unit sensor single axis rotation angles for knee and hip flexion angle calculations during gait. IEEE J. Transl. Eng. Health Med. 2022, 11, 80–86. [Google Scholar] [PubMed]
  19. Seel, T.; Raisch, J.; Schauer, T. IMU-based joint angle measurement for gait analysis. Sensors 2014, 14, 6891–6909. [Google Scholar]
  20. Lebleu, J.; Gosseye, T.; Detrembleur, C.; Mahaudens, P.; Cartiaux, O.; Penta, M. Lower limb kinematics using inertial sensors during locomotion: Accuracy and reproducibility of joint angle calculations with different sensor-to-segment calibrations. Sensors 2020, 20, 715. [Google Scholar]
  21. Kim, D.; Kwon, J.; Han, S.; Park, Y.L.; Jo, S. Deep full-body motion network for a soft wearable motion sensing suit. IEEE/ASME Trans. Mechatron. 2018, 24, 56–66. [Google Scholar]
  22. Menolotto, M.; Komaris, D.S.; Tedesco, S.; O’Flynn, B.; Walsh, M. Motion capture technology in industrial applications: A systematic review. Sensors 2020, 20, 5687. [Google Scholar] [PubMed]
  23. Godfrey, A.; Conway, R.; Meagher, D.; ÓLaighin, G. Direct measurement of human movement by accelerometry. Med. Eng. Phys. 2008, 30, 1364–1386. [Google Scholar]
  24. Moral-Munoz, J.A.; Zhang, W.; Cobo, M.J.; Herrera-Viedma, E.; Kaber, D.B. Smartphone-based systems for physical rehabilitation applications: A systematic review. Assist. Technol. 2021, 33, 223–236. [Google Scholar]
  25. Tsoukas, V.; Boumpa, E.; Giannakas, G.; Kakarountas, A. A review of machine learning and tinyml in healthcare. In Proceedings of the 25th Pan-Hellenic Conference on Informatics, Volos, Greece, 26–28 November 2021; pp. 69–73. [Google Scholar]
  26. Fedorov, I.; Stamenovic, M.; Jensen, C.; Yang, L.C.; Mandell, A.; Gan, Y.; Mattina, M.; Whatmough, P.N. TinyLSTMs: Efficient neural speech enhancement for hearing aids. arXiv 2020, arXiv:2005.11138. [Google Scholar]
  27. Fyntanidou, B.; Zouka, M.; Apostolopoulou, A.; Bamidis, P.D.; Billis, A.; Mitsopoulos, K.; Angelidis, P.; Fourlis, A. IoT-based smart triage of COVID-19 suspicious cases in the Emergency Department. In Proceedings of the 2020 IEEE Globecom Workshops, Taipei, Taiwan, 8–10 December 2020; pp. 1–6. [Google Scholar]
  28. Gupta, S.; Jain, S.; Roy, B.; Deb, A. A TinyML Approach to Human Activity Recognition. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2022; Volume 2273, p. 012025. [Google Scholar]
  29. Warden, P.; Situnayake, D. Tinyml: Machine Learning with Tensorflow Lite on Arduino and Ultra-Low-Power Microcontrollers; O’Reilly Media: Sevastopol, CA, USA, 2019. [Google Scholar]
  30. Sousa Lima, W.; Souto, E.; El-Khatib, K.; Jalali, R.; Gama, J. Human activity recognition using inertial sensors in a smartphone: An overview. Sensors 2019, 19, 3213. [Google Scholar]
  31. Antonopoulos, C.P.; Antonopoulos, K.; Panagiotou, C.; Voros, N.S. Tackling Critical Challenges towards Efficient CyberPhysical Components & Services Interconnection: The ATLAS CPS Platform Approach. J. Signal Process. Syst. 2019, 91, 1273–1281. [Google Scholar]
  32. He, Z.; Jin, L.; Zhen, L.; Huang, J. Gesture recognition based on 3D accelerometer for cell phones interaction. In Proceedings of the APCCAS 2008—2008 IEEE Asia Pacific Conference on Circuits and Systems, Macao, China, 30 November–3 December 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 217–220. [Google Scholar]
  33. Erdaş, Ç.B.; Atasoy, I.; Açıcı, K.; Oğul, H. Integrating features for accelerometer-based activity recognition. Procedia Comput. Sci. 2016, 98, 522–527. [Google Scholar] [CrossRef] [Green Version]
  34. Edge Impulse Documentation—Impulse Design. Available online: https://docs.edgeimpulse.com/docs/edge-impulse-studio/impulse-design (accessed on 21 July 2023).
Figure 1. The knee joint’s range-of-motion milestones.
Figure 2. System’s architecture and main workflow.
Figure 3. (a) Goniometer with Arduino attached, (b) goniometer on knee, and (c) knee brace.
Figure 4. Samples of a training set indicating positive and negative values at Y axis.
Figure 5. The feature extraction process [34].
Figure 6. A graphical representation of the neural network graph with two hidden layers.
Figure 7. Knee brace live classification results.
Figure 8. Classification accuracy in first three attempts.
Table 1. Movements and required range of motion.
Activity | Required Knee Range of Motion
Walk without a limp | 70°
Safely climb stairs | 83°
Safely descend stairs | 90°
Get in and out of car | 100°
Get up from chair | 105°
Ride a bike | 115°
Garden | 117°
Squat | 125°
Table 2. Parameters and their values for the training process.
Parameter | Value
Number of training cycles | 50
Learning rate | 0.0005
Validation set size | 20%
Sampling frequency | 62.5 Hz
Input layer | 33 features
First dense layer | 20 neurons
Second dense layer | 10 neurons
Table 3. Accuracy results of all cases.
Setup | 20° | 45° | 60° | 90° | 120° | Overall Accuracy | F1
Training (on the Cloud)
Goniometer | 100% | 100% | 99.5% | 94.9% | 93.65% | 97.61% | 0.98
Goniometer on knee | 100% | 100% | 95.7% | 100% | 100% | 99.1% | 0.99
Knee brace | 100% | 100% | 100% | 100% | 100% | 100% | 1.00
Classification (on the Cloud)
Goniometer | 87.2% | 76.6% | 84.2% | 99.8% | 99.5% | 88.45% | 0.92
Goniometer on knee | 100% | 69.7% | 73.4% | 76.4% | 79.6% | 78.54% | 0.84
Knee brace | 100% | 100% | 100% | 100% | 100% | 100% | 1.00
Table 4. Confusion matrix of the knee brace live classification.
 | 20° | 45° | 60° | 90° | 120° | Anomaly | Uncertain
20° | 100% | 0% | 0% | 0% | 0% | 0% | 0%
45° | 0% | 100% | 0% | 0% | 0% | 0% | 0%
60° | 0% | 0% | 100% | 0% | 0% | 0% | 0%
90° | 0% | 0% | 0% | 100% | 0% | 0% | 0%
120° | 0% | 0% | 0% | 0% | 100% | 0% | 0%
Anomaly | – | – | – | – | – | – | –
F1 score | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | –
Table 5. Live classification samples of the test set.
Prediction | Length | Anomaly | Accuracy
45° | 10 s | −0.21 | 100%
60° | 5 s | −0.16 | 100%
20° | 15 s | −0.12 | 100%
120° | 15 s | −0.34 | 100%
120° | 15 s | −0.27 | 100%
90° | 15 s | −0.25 | 100%
90° | 15 s | −0.23 | 100%
60° | 15 s | −0.26 | 100%
60° | 15 s | −0.27 | 100%
45° | 15 s | −0.20 | 100%
45° | 15 s | −0.19 | 100%
20° | 15 s | −0.25 | 100%
20° | 15 s | −0.17 | 100%
Table 6. Comparison of the Cloud and on-device methods (classification accuracy).
Setup | Cloud | On Device
Goniometer | 88.45% | 87.4%
Goniometer on knee | 78.54% | 80.03%
Knee brace | 100% | 100%