Article

Detection of Postural Control in Young and Elderly Adults Using Deep and Machine Learning Methods with Joint–Node Plots

1 Department of Occupational Therapy, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
2 Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
3 Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Rd., Hsinchu 30010, Taiwan
4 Department of Radiology, Zuoying Branch of Kaohsiung Armed Forces General Hospital, No. 553, Junxiao Rd., Zuoying District, Kaohsiung 81342, Taiwan
5 Department of Information Engineering, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung 82445, Taiwan
6 Department of Occupational Therapy, Kaohsiung Municipal Kai-Syuan Psychiatric Hospital, No. 130, Kaisyuan 2nd Rd., Lingya District, Kaohsiung 80276, Taiwan
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(9), 3212; https://doi.org/10.3390/s21093212
Submission received: 25 March 2021 / Revised: 25 April 2021 / Accepted: 30 April 2021 / Published: 5 May 2021
(This article belongs to the Special Issue Advances and Application of Human Movement Sensors)

Abstract

Postural control decreases with aging; thus, an efficient and accurate method of detecting postural control is needed. We enrolled 35 elderly adults (aged 82.06 ± 8.74 years) and 20 healthy young adults (aged 21.60 ± 0.60 years), each of whom performed a 40 s standing task six times. The coordinates of 15 joint nodes were captured using a Kinect device (30 Hz). We plotted the joint positions into a single 2D figure (named a joint–node plot, JNP) once per second for up to 40 s. A total of 15 methods combining deep and machine learning for postural control classification were investigated. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and kappa values of the selected methods were assessed. The best-performing methods achieved values higher than 0.9 for all six indices in validation testing. The presented method using JNPs demonstrated strong performance in detecting the postural control ability of young and elderly adults.

1. Introduction

Postural control is a complex motor function derived from several integrated neural components, including sensory and movement strategies, orientation in space, biomechanical constraints, and cognitive processing [1]. It is the ability to maintain posture against gravity and ensure that balance is preserved. Force plates are frequently used to measure balance [2,3]. Force plate equipment and motion analysis systems allow therapists to accurately describe the center of gravity (COG) location, center of body mass (COM) position, center of pressure (COP) displacement, and the kinematics of movement strategies for balance. The COG is the average location of the weight of an object, and the COM is the average position of all the parts of the body, weighted according to mass. However, the movements of body parts make assessing postural control by measuring an average location, position, or displacement (COG, COM, COP) challenging [4]. Measuring postural control is also difficult because postural changes may result from slight movements that are hard to detect through simple observation by the human eye [1]. Observational balance measures such as the Berg Balance Scale are used to evaluate balance; however, they evaluate performance rather than balance movement strategies. The assessment scales used by therapists tend to be subjective, and their reliability and sensitivity can be limited [5]. Measurements of postural control should identify how stably or quickly a subject achieves or maintains an equilibrium position and the appropriateness and efficiency of the movement strategies used to do so. Objective measures of postural control using computerized systems can allow more sensitive, specific, and responsive assessments in clinical practice.
Microsoft Kinect is a popular human motion capture tool. Kinect cameras are useful, as they provide joint center position data directly without additional processing of depth or image data [6]. Recent evidence suggests that Kinect may enable low-cost balance assessments and gait analyses [7,8,9,10,11,12,13,14,15,16]. The Kinect device has been reported to be valid for the evaluation of spatiotemporal gait parameters [17], and its kinematic information is generally accurate enough for ergonomic assessments [18]. Postural control is the coordination of multiple joints to maintain postural stability, and the device can be used to collect large amounts of joint data to explore the coordinated relationships among the joints of the whole body during the maintenance of postural control [19]. Kinect's kinematic parameters follow joint trajectories and thus can be used to measure spatiotemporal aspects of postural control. Recent studies have demonstrated that 3D motion analysis of data from the Kinect motion capture system can be used in clinical assessments of coordination and balance and could potentially be used to monitor gross motor performance and assess motor function [20,21,22]. However, most studies have explored the displacement of the COP, COM, or COG or the kinematics of body segments [7,11,13,23,24,25,26], whereas few have endeavored to classify the quality of postural control or measure slight differences in similar situations of postural control.
Neural networks have advanced at a remarkable rate, and they have practical applications in various industries, including the medical and health care industry [27,28,29,30]. Deep learning has major applications in medical diagnosis, classification, and prediction, including but not limited to health informatics [31] and biomedicine analysis [32]. Other uses of deep learning in the medical field are in medical image segmentation, registration, and detection of various anatomical regions of interest, such as in magnetic resonance imaging [33], ultrasound [34], and radiography [35]. The clinical use of images from digital cameras or depth sensors combined with deep and machine learning has promise for postural control assessment, body motion assessment, and fall detection. In one study, skeleton joints data from Kinect were used to determine human balance states, and a fall prediction algorithm based on recurrent neural networks and unbalanced posture features was proposed [36]. One fall detection method based on 3D skeleton data obtained from Kinect employed long short-term memory networks [37]. One study investigated the extent to which such deep learning–based systems provide satisfactory accuracy in exergame-relevant measures; a deep learning–based system was reported to perform as well as the gold standard system in the detection of temporal variations [38]. In one study, a long short-term memory recurrent neural network was used in a supervised machine learning architecture and a novel deep learning–refined kinematic model with good kinematic accuracy for upper limb functional assessment was developed [39]. Therefore, Kinect’s image information combined with machine and deep learning can be used to develop an effective limb functional assessment system for medical diagnosis or therapeutic evaluation.
Convolutional neural networks (CNNs) are the most widely represented class in deep learning and medical image analysis [27,28]. Deep learning methods are useful for extracting various image features, whereas machine learning approaches are efficient, rapid, and quantitative and can be used to build classification methods for numerous predictors. Hence, a combination of deep and machine learning methods was employed in this study.
Objective measurements of postural control made with a computerized system using Kinect combined with machine and deep learning can enable sensitive postural control assessment in clinical practice. Such a system might effectively classify the quality of postural control or identify minute differences between cases of similar postural control. This study is the first to combine joint node motion information with deep learning to extract joint node trajectory features and with machine learning to classify postural control stability according to joint node trajectory patterns. This work had a twofold aim: to extract joint node trajectory plot features in order to explore relative motion and to classify the stability of postural control according to joint node trajectory patterns.
The remainder of the paper is organized as follows. The research methodology is described in Section 2. The experimental results are presented in Section 3. The proposed features for assessing postural control performance and the joint–node plot (JNP) are discussed in Section 4. Section 5 presents the conclusion and proposes future research directions.

2. Materials and Methods

2.1. Experimental Design in Young and Elderly Adults

The experimental group was composed of elderly people who had a medical history and disabilities in daily life and resided in a nursing home; accordingly, they were regarded as a group with poor postural control. The young adults had no medical history and no tremor problems and were therefore regarded as the control group. The study was conducted at a nursing home and on a college campus. Participants were recruited by a clinic nurse and study staff. To be included, participants had to meet the following criteria: be adults (>20 years old) to rule out developmental problems; have no restriction on physical activity; have no lower-limb discomfort and be able to maintain a double-leg stance with both eyes open for at least 40 s; and be willing to provide consent to participate in the study. The selected participants in both the young and elderly groups underwent the Mini-Mental State Examination (MMSE), Barthel Index (BI), and Berg Balance Scale (BBS) (Table 1). The young participants obtained full marks on the MMSE, BI, and BBS and had no medical history. The elderly participants had to be 65 years of age or older and able to cooperate with the balance test, communicate, and read. Exclusion criteria were severe somatic illness and neurological or musculoskeletal impairment, including cognitive impairment, chest pain, angina pectoris, joint pain during recent exercise, congestive heart failure, or having been advised by a doctor not to exercise. In all, 35 elderly adults (aged 82.06 ± 8.74 years) and 20 healthy young adults (aged 21.60 ± 0.60 years) participated. Postural control was measured from the recorded coordinates of 15 joints. All participants were required to stand statically for 40 s while measurements were captured by a Kinect device, and the recording procedure was performed daily for 6 days. The participants were instructed to look straight at a visual reference and stand still with their shoulders relaxed, arms at the sides of the trunk, feet slightly apart, and knee and hip joints in the upright position for 40 s. The participants were classified as young (control group) or elderly (experimental group) adults, and the elderly (experimental) group was the target class because of its poorer postural control. The experimental setup is depicted in Figure 1. All experimental procedures were approved by the Institutional Review Board of E-DA Hospital [approval number EMRP-107-103 (2019/01/28)].
The study flowchart (Figure 2) comprises measurement of the participants' joint node coordinates with Kinect, creation of joint node images from those coordinates, extraction of features from the images, and training of the classification models. The models were validated with a testing set, and the final results were recorded.

2.2. Measurement of Joint Coordinates

The Kinect device (Microsoft Inc., Redmond, WA, USA) recorded joint node locations and was connected to a personal computer–based signal processing system. Each data point of a joint node signal comprises X, Y, and Z coordinates. Only the X and Y coordinates were considered in this study because vertical movement is negligible when standing still. The joint node signals were recorded at a frequency of 30 Hz.

2.3. Creating the JNP

The positions of the 15 joints were recorded by Kinect for 40 s (Figure 3a), yielding 1200 (X, Y) coordinates per joint over the recording period. The JNP was therefore created to visualize postural control and examine stability over the 40 s period (Figure 3b,c). The JNPs clearly visualized good or poor postural control and provided positioning information for the deep and machine learning approaches.
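The plotting step can be sketched programmatically. The following Python/Matplotlib code is a minimal illustration rather than the authors' implementation; the array layout (frames × 15 joints × 2 coordinates), the plotting interval, and the file name are assumptions.

# Minimal sketch (assumption, not the authors' code): render one joint-node plot (JNP)
# from a 40 s Kinect recording by drawing every joint's (X, Y) samples in one figure.
import numpy as np
import matplotlib.pyplot as plt

def save_jnp(coords, out_path, step=1):
    # coords: array of shape (n_frames, 15, 2) with the (X, Y) position of each joint per
    # frame; step thins the plot (e.g., step=30 draws one sample per second at 30 Hz).
    fig, ax = plt.subplots(figsize=(8.75, 6.56), dpi=100)  # roughly 875 x 656 pixels
    for joint in range(coords.shape[1]):
        ax.plot(coords[::step, joint, 0], coords[::step, joint, 1], ".", markersize=2)
    ax.set_axis_off()  # keep only the joint trajectories, as in Figure 3b,c
    fig.savefig(out_path)
    plt.close(fig)

# Example with synthetic data: 40 s at 30 Hz = 1200 frames of 15 joints.
coords = np.arange(15)[None, :, None] * 0.1 + 0.005 * np.random.randn(1200, 15, 2)
save_jnp(coords, "jnp_subject01_run1.png")  # hypothetical output file name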

2.4. Deep and Machine Learning Methods

Combinations of deep and machine learning methods were used to classify and predict postural control in young and elderly adults. The 90 model combinations involved five CNNs, three classifiers, three epoch settings (10, 15, and 20), and two random splitting ratios for the training set (60% and 70%), i.e., 5 × 3 × 3 × 2 combinations; they are enumerated in the sketch below and listed in Appendix A.
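The grid of combinations can be enumerated with a few lines of Python; this sketch is illustrative only, although the iteration order shown reproduces the model numbering (M1–M90) given in Appendix A.

# Sketch: enumerate the 90 evaluated combinations of epoch size, training ratio,
# classifier, and CNN feature extractor (the numbering matches Appendix A).
from itertools import product

epochs = [10, 15, 20]
train_ratios = [0.6, 0.7]
learners = ["LR", "NB", "SVM"]
cnns = ["AlexNet", "DenseNet201", "ResNet50", "VGG16", "VGG19"]

combinations = list(product(epochs, train_ratios, learners, cnns))
print(len(combinations))  # 90
for model_id, (epoch, ratio, learner, cnn) in enumerate(combinations, start=1):
    print(f"M{model_id}: epoch={epoch}, train_ratio={ratio}, CNN={cnn}, learner={learner}")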

2.4.1. Deep Learning Methods

The pre-trained CNNs applied to extract features of the JNPs were VGG16, VGG19, AlexNet, ResNet50, and DenseNet201. A deep CNN comprises five primary layer types: convolutional layers, pooling layers, rectified linear unit layers, fully connected layers, and a softmax layer. The fully connected CNN layers extracted and stored the features of the input image. The CNNs used here were those described by Hsu et al. [40] and are listed in Table 2. CNNs have been confirmed to be efficient and useful for image feature extraction in the fields of biomedicine and biology [41,42,43]. In the current study, the number of epochs was set to 10, 15, or 20, and the training set comprised 60% or 70% of the data, randomly selected from the groups.
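As an illustration of the feature extraction step, the sketch below uses PyTorch/torchvision with an ImageNet-pretrained VGG16 and reads out the activation of the final fully connected layer. The paper does not state which framework was used, and the fine-tuning over 10, 15, or 20 epochs is omitted here, so the library choices and file name should be treated as assumptions.

# Hedged sketch: extract features for one JNP from the last fully connected layer
# of a pre-trained VGG16 (framework choice and file name are assumptions).
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),              # 227 x 227 would be used for AlexNet
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

vgg16 = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg16.eval()

def extract_features(image_path):
    # The forward pass of torchvision's VGG16 ends at the last fully connected layer,
    # so its output (a 1000-dimensional vector) serves directly as the JNP feature.
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return vgg16(x).squeeze(0)

features = extract_features("jnp_subject01_run1.png")  # hypothetical file name
print(features.shape)  # torch.Size([1000])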

2.4.2. Machine Learning Methods

Logistic regression (LR) is often applied to analyze associations between two or more predictors or variables. Regression analysis is commonly adopted to describe relations between predictors or variables and to build a linear functional model, whereas regression modeling is usually used to predict an outcome from new predictor values. LR is a binary regression model that is widely used in machine learning to build classification models because of its simplicity and interpretability, and many fields have adopted it for prediction and classification.
A support vector machine (SVM) is a supervised learning method that constructs a hyperplane for classifying categorical data. SVMs are generally used for high-dimensional or nonlinear classification, and many kernels are available to improve classification performance and reduce error rates.
Naive Bayes (NB) classifiers are based on Bayes' theorem with a naive assumption of independence between the adopted predictors or features. NB classifiers can achieve higher accuracy when combined with kernel density estimation [44]. They also offer high flexibility for linear or nonlinear relations among variables (features/predictors) in classification problems, and their computational cost is linear, in contrast with the expensive iterative approximation required by many other classifiers.
To classify the postural control of the young and elderly groups, these algorithms were applied to the JNP features extracted by the deep learning methods; a minimal sketch of this classification step is given below.
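The following scikit-learn sketch illustrates the step; the placeholder feature matrix, the RBF kernel, and the other settings are assumptions rather than the authors' configuration.

# Sketch (assumptions, not the authors' pipeline): train LR, NB, and SVM classifiers
# on CNN-extracted JNP features and report test accuracy for each.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# X: one feature vector per JNP (placeholder random values here);
# y: 1 = elderly (target class), 0 = young; 150 + 120 = 270 JNPs as in Section 2.5.
X = np.random.rand(270, 1000)
y = np.concatenate([np.ones(150), np.zeros(120)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=0)   # 0.6 was also used in the paper

learners = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "SVM": SVC(kernel="rbf"),                            # kernel choice is an assumption
}
for name, clf in learners.items():
    clf.fit(X_train, y_train)
    print(name, "test accuracy:", round(clf.score(X_test, y_test), 3))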

2.5. Evaluating Model Performance

The coordinates of the 15 joints, continually measured for 40 s, were plotted in one figure for each participant. Each participant had six figures as a result of the replicated runs. Hence, totals of 120 and 150 JNPs were created for the young and elderly groups, respectively. The size of a JNP was 875 × 656 pixels with 24 bits per pixel. The testing sets consisted of 48 and 60 JNPs (40%) or 36 and 45 JNPs (30%), randomly selected from the young and elderly groups, respectively. The original data were partitioned randomly into training and testing sets without overlapping samples.
The testing sets were used to evaluate model performance with commonly used indices. A confusion matrix was used to assess model suitability in terms of accuracy, sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and kappa value. The models were sorted in ascending order according to the kappa value, and a radar plot was developed to display the six indices for the presented models.
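All six indices can be computed directly from the confusion matrix; the sketch below (scikit-learn is an assumption, and the toy labels are illustrative only) shows one way to obtain them.

# Sketch: compute accuracy, sensitivity, specificity, PPV, NPV, and kappa from a
# confusion matrix, with the elderly group coded as the positive (target) class.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def evaluate(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV":         tp / (tp + fp),
        "NPV":         tn / (tn + fn),
        "kappa":       cohen_kappa_score(y_true, y_pred),
    }

# Toy example (1 = elderly, 0 = young), for illustration only.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
print(evaluate(y_true, y_pred))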

3. Results

3.1. Model Performance: 60% of Data for Training and 40% of Data for Testing

A total of 45 models were obtained from the five deep learning methods, the three epoch settings, the three machine learning algorithms, and a 60% random splitting ratio of the original data. Another 45 models were obtained with a 70% random splitting ratio. Figure 4 and Figure 5 present the validation results of the testing sets with 40% and 30% random splitting ratios of the original data, respectively.
The size of the epoch used in CNN models is crucial for making inferences regarding classification performance. Therefore, epoch size was considered during evaluation of the models. Figure 4 depicts the performance of 45 models (combinations) with 40% of the data used as the testing set. Table 3 details the performance of five CNNs. The model combining VGG16 and SVM under epoch 15 (M29) had the best performance out of all the models. The accuracy, sensitivity, specificity, PPV, NPV, and kappa value were 0.98, 0.99, 0.95, 0.98, 0.98, and 0.95, respectively. These results suggest that VGG16 extracted useful JNP features and the SVM feasibly classified the features. The SVM demonstrated high dimensional classification capacity and efficiency when combined with AlexNet, DenseNet201, ResNet50, VGG16, and VGG19, with an accuracy of 0.95 or higher generated by these combinations.

3.2. Model Performance: 70% of the Data Used for Training and 30% of the Data Used for Testing

Figure 5 portrays the performance of 45 models (combinations) when 70% and 30% of the data was used for training and testing, respectively. Table 4 presents the performance of five CNNs. The combination of VGG19 and SVM under epoch 20 (M90) had the best performance. The accuracy, sensitivity, specificity, PPV, NPV, and kappa value were 0.99, 0.99, 0.97, 0.99, 0.98, and 0.97, respectively. The second-highest performance was achieved by M89, which was a combination of VGG16 and SVM under epoch 20. Both VGG16 and VGG19 extracted useful JNP features, and the SVM feasibly classified them. The SVM demonstrated good classification ability and efficiency when combined with AlexNet, DenseNet201, ResNet50, VGG16, and VGG19, with accuracies of 0.95 or higher.
The combination of VGG19 and SVM with 70% of the data used for training in epoch 20 (M90) could be used to classify postural control in young and elderly groups through the JNP.

4. Discussion

4.1. The Informative JNP

Using joint motion trajectories instead of COP or COM displacement for analysis enables the evaluation of posture control ability as well as the posture control strategies used to achieve balance [45,46]. In the current study, the JNPs of the elderly group indicated that they tended to use an extreme joint coordination mode, an inter-joint coordination strategy characterized by total joint dependence, to maintain balance when standing still [19].
The JNP provided information not only on postural control but also on tremors. No screening test or tool is available for the early detection of Parkinson's disease. The JNP can help in evaluating coordinated interactions among joints and discovering involuntary tremors of each segment when an individual is standing still [47]. The stability of the torso and proximal joints in the elderly group was similar to that in the young group, but the forearm and knee joints exhibited slight tremors (Figure 6a), which may have been psychogenic or physiological tremors. Figure 6b displays postural stability of the joints of various body parts that was better than that of most elderly participants in the study, although the forearm and hand joints exhibited obvious psychogenic or physiological tremors. The postural stability of the joints in Figure 6d is similar to that in Figure 6c and may indicate a postural tremor. In some cases, the left forearm shook more, but the whole body shook horizontally (Figure 6c). Figure 6c displays a typical pattern of postural stability of the joints in the elderly group, possibly indicating a postural or psychogenic tremor. In Figure 6e, symmetrical shaking of the wrists and lower limbs occurs on both sides; this is suspected to be an essential tremor or Parkinsonian tremor. In Figure 6f, whole-body shaking, including shaking of the feet, is intense and asymmetrical and leads to instability when the individual is standing; in such cases, a Parkinsonian tremor is suspected. When a postural tremor occurs, further testing is required to confirm its cause, which may be, for example, primary cerebellar disease, brain injury, dystonia, Parkinson's disease, alcohol, or drugs. Hence, the JNP may be used to visualize shaking and relations between tremors and diseases.

4.2. Combined Deep and Machine Learning

In this study, VGG16, VGG19, AlexNet, ResNet50, and DenseNet201 were used to extract image features for the development of the classification models. Although several fully connected layers (FCLs) were present in each CNN, we did not survey and compare all of them; only the last FCL of each CNN was applied to extract image features for the SVM, LR, and NB classifiers. The SVM was regarded as an efficient classifier for detecting and classifying postural control in young and elderly adults on the basis of their JNPs.
M29 combined VGG16 and the SVM (training set, 60%; testing set, 40%), and M90 combined VGG19 and the SVM (training set, 70%; testing set, 30%) to classify balance function. The accuracy and kappa values of M29 and M90 were (98%, 95%) and (99%, 97%), respectively. The validation results indicated that both M29 and M90 could classify balance function in the elderly group with high agreement and consistency. Additionally, the deep learning component of the VGG architecture provided useful image features for the SVM. Therefore, the SVM was able to distinguish balance function between the young and elderly adults.
Table 5 summarizes the results in Table 3 and Table 4. All 27 methods selected achieved kappa values of 0.88 or higher. AlexNet, VGG16, and VGG19 each appeared six times. DenseNet201 and ResNet50 appeared four and five times, respectively. The minimum accuracy generated by AlexNet, VGG16, and VGG19 was 0.97. VGG19 combined with the SVM had the highest maximum accuracy among the five deep learning methods with the SVM classifier.

4.3. Comparison with Reported Results

The proposed methods were compared with previously developed methods with respect to the results listed in Table 6. SVMs, random forest models, and cohort studies have been applied to detect motor [48,49], balance, or gait function [50,51,52,53,54,55,56,57,58,59,60]. The highest reported accuracy in classifying motor function was 97%, achieved by an SVM, and the highest accuracy for classifying gait or balance function was 96.7%, also achieved by an SVM. Thus, SVMs have proven successful in classification tasks. The proposed methods achieved higher accuracies than the other methods listed in Table 6 while remaining reasonable and feasible.
To further test the reliability of the proposed methods in classifying postural control, a future study might compare the results with those of a gold standard assessment, such as a functional or balance assessment.

5. Conclusions

The JNP can reveal postural coordinates in a two-dimensional image. Moreover, it provides visual information on postural sway and, when used with the deep and machine learning methods developed in this study, is suitable for the classification and detection of postural control in young and elderly adults. The best performance was achieved by the combination of VGG19 and the SVM with 70% of the data used for training and an epoch size of 20. The correlations between JNPs and clinical tremors can be investigated in future research. Ideally, both the elderly and the young groups would be screened or assessed with a gold standard method (e.g., by a psychologist, radiologist, or related medical assessor) to verify the group assignments. The lack of such a screening test or gold standard assessment for each subject is a limitation of this research, and future work can incorporate more rigorous screening.

Author Contributions

Conceptualization, P.L., T.-B.C. and C.-H.L.; Data curation, T.-B.C.; Formal analysis, P.L., T.-B.C. and C.-H.L.; Investigation, C.-Y.W.; Methodology, P.L., T.-B.C. and C.-H.L.; Project administration, P.L.; Software, S.-Y.H.; Supervision, P.L. and C.-H.L.; Writing—original draft, T.-B.C. and C.-H.L.; Writing—review and editing, P.L. and C.-H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the guidelines of the Declaration of Helsinki. All experimental procedures were approved by the Institutional Review Board of the E-DA Hospital, Kaohsiung, Taiwan (approval number EMRP-107-132).

Informed Consent Statement

Written informed consent was obtained from the participants.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Ministry of Science and Technology of Taiwan for partially supporting this study under Contract Nos. 106-2118-M-214-001 and 109-2118-M-214-001. The authors would also like to thank Chen-Wen Yen for his valuable input and technical support during the initial phase of the research.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Appendix A

The 90 investigated methods, each combining a CNN and a classifier, are listed below together with the epoch size and the percentage of data used for the training set.
Epoch | Ratio | CNN | Learner | Model || Epoch | Ratio | CNN | Learner | Model
10 | 0.6 | AlexNet | LR | M1 || 15 | 0.7 | AlexNet | LR | M46
10 | 0.6 | DenseNet201 | LR | M2 || 15 | 0.7 | DenseNet201 | LR | M47
10 | 0.6 | ResNet50 | LR | M3 || 15 | 0.7 | ResNet50 | LR | M48
10 | 0.6 | VGG16 | LR | M4 || 15 | 0.7 | VGG16 | LR | M49
10 | 0.6 | VGG19 | LR | M5 || 15 | 0.7 | VGG19 | LR | M50
10 | 0.6 | AlexNet | NB | M6 || 15 | 0.7 | AlexNet | NB | M51
10 | 0.6 | DenseNet201 | NB | M7 || 15 | 0.7 | DenseNet201 | NB | M52
10 | 0.6 | ResNet50 | NB | M8 || 15 | 0.7 | ResNet50 | NB | M53
10 | 0.6 | VGG16 | NB | M9 || 15 | 0.7 | VGG16 | NB | M54
10 | 0.6 | VGG19 | NB | M10 || 15 | 0.7 | VGG19 | NB | M55
10 | 0.6 | AlexNet | SVM | M11 || 15 | 0.7 | AlexNet | SVM | M56
10 | 0.6 | DenseNet201 | SVM | M12 || 15 | 0.7 | DenseNet201 | SVM | M57
10 | 0.6 | ResNet50 | SVM | M13 || 15 | 0.7 | ResNet50 | SVM | M58
10 | 0.6 | VGG16 | SVM | M14 || 15 | 0.7 | VGG16 | SVM | M59
10 | 0.6 | VGG19 | SVM | M15 || 15 | 0.7 | VGG19 | SVM | M60
10 | 0.7 | AlexNet | LR | M16 || 20 | 0.6 | AlexNet | LR | M61
10 | 0.7 | DenseNet201 | LR | M17 || 20 | 0.6 | DenseNet201 | LR | M62
10 | 0.7 | ResNet50 | LR | M18 || 20 | 0.6 | ResNet50 | LR | M63
10 | 0.7 | VGG16 | LR | M19 || 20 | 0.6 | VGG16 | LR | M64
10 | 0.7 | VGG19 | LR | M20 || 20 | 0.6 | VGG19 | LR | M65
10 | 0.7 | AlexNet | NB | M21 || 20 | 0.6 | AlexNet | NB | M66
10 | 0.7 | DenseNet201 | NB | M22 || 20 | 0.6 | DenseNet201 | NB | M67
10 | 0.7 | ResNet50 | NB | M23 || 20 | 0.6 | ResNet50 | NB | M68
10 | 0.7 | VGG16 | NB | M24 || 20 | 0.6 | VGG16 | NB | M69
10 | 0.7 | VGG19 | NB | M25 || 20 | 0.6 | VGG19 | NB | M70
10 | 0.7 | AlexNet | SVM | M26 || 20 | 0.6 | AlexNet | SVM | M71
10 | 0.7 | DenseNet201 | SVM | M27 || 20 | 0.6 | DenseNet201 | SVM | M72
10 | 0.7 | ResNet50 | SVM | M28 || 20 | 0.6 | ResNet50 | SVM | M73
10 | 0.7 | VGG16 | SVM | M29 || 20 | 0.6 | VGG16 | SVM | M74
10 | 0.7 | VGG19 | SVM | M30 || 20 | 0.6 | VGG19 | SVM | M75
15 | 0.6 | AlexNet | LR | M31 || 20 | 0.7 | AlexNet | LR | M76
15 | 0.6 | DenseNet201 | LR | M32 || 20 | 0.7 | DenseNet201 | LR | M77
15 | 0.6 | ResNet50 | LR | M33 || 20 | 0.7 | ResNet50 | LR | M78
15 | 0.6 | VGG16 | LR | M34 || 20 | 0.7 | VGG16 | LR | M79
15 | 0.6 | VGG19 | LR | M35 || 20 | 0.7 | VGG19 | LR | M80
15 | 0.6 | AlexNet | NB | M36 || 20 | 0.7 | AlexNet | NB | M81
15 | 0.6 | DenseNet201 | NB | M37 || 20 | 0.7 | DenseNet201 | NB | M82
15 | 0.6 | ResNet50 | NB | M38 || 20 | 0.7 | ResNet50 | NB | M83
15 | 0.6 | VGG16 | NB | M39 || 20 | 0.7 | VGG16 | NB | M84
15 | 0.6 | VGG19 | NB | M40 || 20 | 0.7 | VGG19 | NB | M85
15 | 0.6 | AlexNet | SVM | M41 || 20 | 0.7 | AlexNet | SVM | M86
15 | 0.6 | DenseNet201 | SVM | M42 || 20 | 0.7 | DenseNet201 | SVM | M87
15 | 0.6 | ResNet50 | SVM | M43 || 20 | 0.7 | ResNet50 | SVM | M88
15 | 0.6 | VGG16 | SVM | M44 || 20 | 0.7 | VGG16 | SVM | M89
15 | 0.6 | VGG19 | SVM | M45 || 20 | 0.7 | VGG19 | SVM | M90

References

  1. Horak, F.B. Clinical Measurement of Postural Control in Adults. Phys. Ther. 1987, 67, 1881–1885. [Google Scholar] [CrossRef]
  2. Mengarelli, A.; Verdini, F.; Cardarelli, S.; Di Nardo, F.; Burattini, L.; Fioretti, S. Balance assessment during squatting exercise: A comparison between laboratory grade force plate and a commercial, low-cost device. J. Biomech. 2018, 71, 264–270. [Google Scholar] [CrossRef]
  3. Koltermann, J.J.; Gerber, M.; Beck, H.; Beck, M. Validation of the HUMAC Balance System in Comparison with Conventional Force Plates. Technologies 2017, 5, 44. [Google Scholar] [CrossRef] [Green Version]
  4. Leach, J.M.; Mancini, M.; Peterka, R.J.; Hayes, T.L.; Horak, F.B. Validating and Calibrating the Nintendo Wii Balance Board to Derive Reliable Center of Pressure Measures. Sensors 2014, 14, 18244–18267. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Noamani, A.; Nazarahari, M.; Lewicke, J.; Vette, A.H.; Rouhani, H. Validity of using wearable inertial sensors for assessing the dynamics of standing balance. Med. Eng. Phys. 2020, 77, 53–59. [Google Scholar] [CrossRef]
  6. Maudsley-Barton, S.; Hoon Yap, M.; Bukowski, A.; Mills, R.; McPhee, J. A new process to measure postural sway using a kinect depth camera during a sensory organisation test. PLoS ONE 2020, 15, e0227485. [Google Scholar] [CrossRef]
  7. Yang, Y.; Pu, F.; Li, Y.; Li, S.; Fan, Y.; Li, D. Reliability and validity of Kinect RGB-D sensor for assessing standing balance. IEEE Sens. J. 2014, 14, 1633–1638. [Google Scholar] [CrossRef]
  8. González, A.; Hayashibe, M.; Bonnet, V.; Fraisse, P. Whole body center of mass estimation with portable sensors: Using the statically equivalent serial chain and a kinect. Sensors 2014, 14, 16955–16971. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Pu, F.; Sun, S.; Wang, L.; Li, Y.; Yu, H.; Yang, Y.; Zhao, Y.; Li, S. Investigation of key factors affecting the balance function of older adults. Aging Clin. Exp. Res. 2015, 27, 139–147. [Google Scholar] [CrossRef]
  10. Clark, R.A.; Pua, Y.-H.; Oliveira, C.C.; Bower, K.J.; Thilarajah, S.; McGaw, R.; Hasanki, K.; Mentiplay, B.F. Reliability and concurrent validity of the Microsoft Xbox One Kinect for assessment of standing balance and postural control. Gait Posture 2015, 42, 210–213. [Google Scholar] [CrossRef]
  11. Lim, D.; Kim, C.; Jung, H.; Jung, D.; Chun, K.J. Use of the Microsoft Kinect system to characterize balance ability during balance training. Clin. Interv. Aging 2015, 10, 1077. [Google Scholar] [CrossRef] [Green Version]
  12. Xu, X.; McGorry, R.W. The validity of the first and second generation Microsoft Kinect™ for identifying joint center locations during static postures. Appl. Ergon. 2015, 49, 47–54. [Google Scholar] [CrossRef]
  13. Lv, Z.; Penades, V.; Blasco, S.; Chirivella, J.; Gagliardo, P. Evaluation of Kinect2 based balance measurement. Neurocomputing 2016, 208, 290–298. [Google Scholar] [CrossRef]
  14. Puh, U.; Hoehlein, B.; Deutsch, J.E. Validity and reliability of the Kinect for assessment of standardized transitional movements and balance: Systematic review and translation into practice. Phys. Med. Rehabil. Clin. 2019, 30, 399–422. [Google Scholar] [CrossRef] [PubMed]
  15. Yang, S.T.; Kang, D.W.; Seo, J.W.; Kim, D.H.; Kim, T.H.; Choi, J.S.; Tack, G.R. Evaluation of balance ability of the elderly using kinect sensor. Trans. Korean Inst. Electr. Eng. 2017, 66, 439–446. [Google Scholar] [CrossRef] [Green Version]
  16. Hsiao, M.-Y.; Li, C.-M.; Lu, I.-S.; Lin, Y.-H.; Wang, T.-G.; Han, D.S. An investigation of the use of the Kinect system as a measure of dynamic balance and forward reach in the elderly. Clin. Rehabil. 2018, 32, 473–482. [Google Scholar] [CrossRef] [PubMed]
  17. Springer, S.; Yogev Seligmann, G. Validity of the Kinect for Gait Assessment: A Focused Review. Sensors 2016, 16, 194. [Google Scholar] [CrossRef]
  18. Plantard, P.; Auvinet, E.; Pierres, A.-S.L.; Multon, F. Pose Estimation with a Kinect for Ergonomic Studies: Evaluation of the Accuracy Using a Virtual Mannequin. Sensors 2015, 15, 1785–1803. [Google Scholar] [CrossRef] [PubMed]
  19. Liu, C.-H.; Lee, P.; Chen, Y.-L.; Yen, C.-W.; Yu, C.-W. Study of Postural Stability Features by Using Kinect Depth Sensors to Assess Body Joint Coordination Patterns. Sensors 2020, 20, 1291. [Google Scholar] [CrossRef] [Green Version]
  20. Heidt, C.; Vrankovic, M.; Mendoza, A.; Hollander, K.; Dreher, T.; Rueger, M. Simplified Digital Balance Assessment in Typically Developing School Children. Gait Posture 2021, 84, 389–394. [Google Scholar] [CrossRef]
  21. Oña, E.D.; Jardón, A.; Balaguer, C. Automatic Assessment of Arm Motor Function and Postural Stability in Virtual Scenarios: Towards a Virtual Version of the Fugl-Meyer Test. In Proceedings of the 2020 IEEE 8th International Conference on Serious Games and Applications for Health (SeGAH), Vancouver, BC, Canada, 12–14 August 2020; pp. 1–6. [Google Scholar] [CrossRef]
  22. Otte, K.; Kayser, B.; Mansow-Model, S.; Verrel, J.; Paul, F.; Brandt, A.U.; Schmitz-Hübsch, T. Accuracy and Reliability of the Kinect Version 2 for Clinical Measurement of Motor Function. PLoS ONE 2016, 11, e166532. [Google Scholar] [CrossRef]
  23. Eltoukhy, M.A.; Kuenze, C.; Oh, J.; Signorile, J.F. Validation of static and dynamic balance assessment using Microsoft Kinect for young and elderly populations. IEEE J. Biomed. Health Inform. 2017, 22, 147–153. [Google Scholar] [CrossRef] [PubMed]
  24. Eltoukhy, M.A.; Kuenze, C.; Oh, J.; Wooten, S.; Signorile, J. Kinect-based assessment of lower limb kinematics and dynamic postural control during the star excursion balance test. Gait Posture 2017, 58, 421–427. [Google Scholar] [CrossRef] [PubMed]
  25. Hayashibe, M.; González, A.; Tournier, M. Personalized balance and fall risk visualization with Kinect Two. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 4863–4866. [Google Scholar] [CrossRef]
  26. Ruff, J.; Wang, T.L.; Quatman-Yates, C.C.; Phieffer, L.S.; Quatman, C.E. Commercially available gaming systems as clinical assessment tools to improve value in the orthopaedic setting: A systematic review. Injury 2015, 46, 178–183. [Google Scholar] [CrossRef] [PubMed]
  27. Bakator, M.; Radosav, D. Deep learning and medical diagnosis: A review of literature. Multimodal Technol. Interact. 2018, 2, 47. [Google Scholar] [CrossRef] [Green Version]
  28. Casalino, G.; Castellano, G.; Consiglio, A.; Liguori, M.; Nuzziello, N.; Primiceri, D. A Predictive Model for MicroRNA Expressions in Pediatric Multiple Sclerosis Detection. In Modeling Decisions for Artificial Intelligence; MDAI 2019; Torra, V., Narukawa, Y., Pasi, G., Viviani, M., Eds.; Springer: Cham, Switzerland, 2019; Volume 11676, pp. 177–188. [Google Scholar] [CrossRef]
  29. Lee, J.G.; Jun, S.; Cho, Y.W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef] [Green Version]
  30. Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273. [Google Scholar] [CrossRef] [PubMed]
  31. Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.-Z. Deep learning for health informatics. IEEE J. Biomed. Health Inf. 2017, 21, 4–21. [Google Scholar] [CrossRef] [Green Version]
  32. Mamoshina, P.; Vieira, A.; Putin, E.; Zhavoronkov, A. Applications of Deep Learning in Biomedicine. Mol. Pharm. 2016, 13, 1445–1454. [Google Scholar] [CrossRef]
  33. Liu, J.; Pan, Y.; Li, M.; Chen, Z.; Tang, L.; Lu, C.; Wang, J. Applications of deep learning to MRI images: A survey. Big Data Mining Anal. 2018, 1, 1–18. [Google Scholar] [CrossRef]
  34. Liu, S.; Wang, Y.; Yang, X.; Lei, B.; Liu, L.; Li, S.X.; Ni, D.; Wang, T. Deep learning in medical ultrasound analysis: A review. Engineering 2019, 5, 261–275. [Google Scholar] [CrossRef]
  35. Sahiner, B.; Pezeshk, A.; Hadjiiski, L.M.; Wang, X.; Drukker, K.; Cha, K.H.; Summers, R.M.; Giger, M.L. Deep learning in medical imaging and radiation therapy. Med. Phys. 2019, 46, e1–e36. [Google Scholar] [CrossRef] [Green Version]
  36. Tao, X.; Yun, Z. Fall prediction based on biomechanics equilibrium using Kinect. Int. J. Distrib. Sens. Netw. 2017, 13, 1550147717703257. [Google Scholar] [CrossRef]
  37. Xu, T.; Zhou, Y. Elders’ fall detection based on biomechanical features using depth camera. Int. J. Wavelets Multiresolut. Inf. Process. 2018, 16, 1840005. [Google Scholar] [CrossRef]
  38. Vonstad, E.K.; Su, X.; Vereijken, B.; Bach, K.; Nilsen, J.H. Comparison of a Deep Learning-Based Pose Estimation System to Marker-Based and Kinect Systems in Exergaming for Balance Training. Sensors 2020, 20, 6940. [Google Scholar] [CrossRef] [PubMed]
  39. Ma, Y.; Liu, D.; Cai, L. Deep Learning-Based Upper Limb Functional Assessment Using a Single Kinect v2 Sensor. Sensors 2020, 20, 1903. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Hsu, S.Y.; Yeh, L.R.; Chen, T.B.; Du, W.C.; Huang, Y.H.; Twan, W.H.; Lin, M.C.; Hsu, Y.H.; Wu, Y.C.; Chen, H.Y. Classification of the Multiple Stages of Parkinson’s Disease by a Deep Convolution Neural Network Based on 99mTc-TRODAT-1 SPECT Images. Molecules 2020, 19, 4792. [Google Scholar] [CrossRef] [PubMed]
  41. Currie, G.; Hawk, K.E.; Rohren, E.; Vial, A.; Klein, R. Machine Learning and Deep Learning in Medical Imaging: Intelligent Imaging. J. Med. Imaging Radiat. Sci. 2019, 50, 477–487. [Google Scholar] [CrossRef] [Green Version]
  42. Nensa, F.; Demircioglu, A.; Rischpler, C. Artificial Intelligence in Nuclear Medicine. J. Nucl. Med. 2019, 60, 29S–37S. [Google Scholar] [CrossRef]
  43. Wang, F.; Preininger, A. AI in Health: State of the Art, Challenges, and Future Directions. Yearb. Med. Inform. 2019, 28, 16–26. [Google Scholar] [CrossRef] [Green Version]
  44. Piryonesi, S.M.; El-Diraby, T.E. Role of Data Analytics in Infrastructure Asset Management: Overcoming Data Size and Quality Problems. J. Transp. Eng. Part B Pavements 2020, 146, 04020022. [Google Scholar] [CrossRef]
  45. Gonzalez, D.; Imbiriba, L.; Jandre, F. Could Postural Strategies Be Assessed with the Microsoft Kinect v2? In World Congress on Medical Physics and Biomedical Engineering 2018; Lhotska, L., Sukupova, L., Lacković, I., Ibbott, G., Eds.; Springer: Singapore, 2019; Volume 68, pp. 725–728. [Google Scholar] [CrossRef]
  46. Bemal, V.E.; Satterthwaite, N.A.; Napoli, A.; Glass, S.M.; Tucker, C.A.; Obeid, I. Kinect v2 accuracy as a body segment measuring tool. In Proceedings of the 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA, 2 December 2017; pp. 1–3. [Google Scholar] [CrossRef]
  47. Geman, O.; Costin, H. Automatic assessing of tremor severity using nonlinear dynamics, artificial neural networks and neuro-fuzzy classifier. Adv. Electr. Comput. Eng. 2014, 14, 133–139. [Google Scholar] [CrossRef]
  48. Di Lazzaro, G.; Ricci, M.; Al-Wardat, M.; Schirinzi, T.; Scalise, S.; Giannini, F.; Mercuri, N.B.; Saggio, G.; Pisani, A. Technology-based objective measures detect subclinical axial signs in untreated, de novo Parkinson’s disease. J. Parkinson’s Dis. 2020, 10, 113–122. [Google Scholar] [CrossRef] [PubMed]
  49. Buongiorno, D.; Bortone, I.; Cascarano, G.D.; Trotta, G.F.; Brunetti, A.; Bevilacqua, V. A low-cost vision system based on the analysis of motor features for recognition and severity rating of Parkinson’s Disease. BMC Med. Inform. Decis. Mak. 2019, 19, 243. [Google Scholar] [CrossRef]
  50. Zhou, Y.; Romijnders, R.; Hansen, C.; van Campen, J.; Maetzler, W.; Hortobágyi, T.; Lamoth, C.J. The detection of age groups by dynamic gait outcomes using machine learning approaches. Sci. Rep. 2020, 10, 1–12. [Google Scholar] [CrossRef]
  51. Craig, J.J.; Bruetsch, A.P.; Huisinga, J.M. Coordination of trunk and foot acceleration during gait is affected by walking velocity and fall history in elderly adults. Aging Clin. Exp. Res. 2019, 31, 943–950. [Google Scholar] [CrossRef]
  52. Craig, J.J.; Bruetsch, A.P.; Lynch, S.G.; Huisinga, J.M. Altered visual and somatosensory feedback affects gait stability in persons with multiple sclerosis. Hum. Mov. Sci. 2019, 66, 355–362. [Google Scholar] [CrossRef]
  53. Bao, T.; Klatt, B.N.; Whitney, S.L.; Sienko, K.H.; Wiens, J. Automatically evaluating balance: A machine learning approach. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 179–186. [Google Scholar] [CrossRef]
  54. Bao, T.; Klatt, B.N.; Carender, W.J.; Kinnaird, C.; Alsubaie, S.; Whitney, S.L.; Sienko, K.H. Effects of long-term vestibular rehabilitation therapy with vibrotactile sensory augmentation for people with unilateral vestibular disorders—A randomized preliminary study. J. Vestib. Res. 2019, 29, 323–334. [Google Scholar] [CrossRef] [Green Version]
  55. Gordt, K.; Gerhardy, T.; Najafi, B.; Schwenk, M. Effects of Wearable Sensor-Based Balance and Gait Training on Balance, Gait, and Functional Performance in Healthy and Patient Populations: A Systematic Review and Meta-Analysis of Randomized Controlled Trials. Gerontology 2018, 64, 74–89. [Google Scholar] [CrossRef]
  56. Niu, J.; Zheng, Y.; Liu, H.; Chen, X.; Ran, L. Stumbling prediction based on plantar pressure distribution. Work 2019, 64, 705–712. [Google Scholar] [CrossRef] [PubMed]
  57. Wiedemeijer, M.M.; Otten, E. Effects of high heeled shoes on gait. A review. Gait Posture 2018, 61, 423–430. [Google Scholar] [CrossRef] [PubMed]
  58. Lerebourg, L.; L’Hermette, M.; Menez, C.; Coquart, J. The effects of shoe type on lower limb venous status during gait or exercise: A systematic review. PLoS ONE 2020, 15, e0239787. [Google Scholar] [CrossRef] [PubMed]
  59. Roongbenjawan, N.; Siriphorn, A. Accuracy of modified 30-s chair-stand test for predicting falls in older adults. Ann. Phys. Rehabil. Med. 2020, 63, 309–315. [Google Scholar] [CrossRef] [PubMed]
  60. Boonsinsukh, R.; Khumnonchai, B.; Saengsirisuwan, V.; Chaikeeree, N. The effect of the type of foam pad used in the modified Clinical Test of Sensory Interaction and Balance (mCTSIB) on the accuracy in identifying older adults with fall history. Hong Kong Physiother. J. 2020, 40, 133–143. [Google Scholar] [CrossRef]
Figure 1. The Kinect device was placed 75 cm above the floor and 2 m in front of the participants. The participants stood still (with arms at their sides) in a comfortable stance for 40 s. The participants were defined as (a) elderly adults (experimental group) or (b) young adults (control group).
Figure 2. Study flowchart.
Figure 3. (a) The Kinect device recorded the positions of 15 joints; joint–node plots of (b) an elderly adult and (c) a young adult over a period of 40 s.
Figure 4. Performance of 45 models using 60% of the data as the training set. Model details are listed in Appendix A.
Figure 5. Performance of 45 models trained using 70% of the data. Model details are provided in Appendix A.
Figure 6. Abnormal patterns of postural control in several participants. (a) The forearm and knee joints exhibited slight tremors. (b) The forearm and hand joints exhibited obvious tremors. (c) The whole body shook horizontally, and the left forearm shook more. (d) The whole body shook horizontally. (e) Symmetrical shaking of the wrists and lower limbs occurred on both sides. (f) The individual stood with whole-body asymmetrical shaking.
Table 1. Demographic characteristics of the samples (N = 35 + 20 = 55).
Index | Elderly (n = 35), Mean (SD) | Young (n = 20), Mean (SD)
Age | 80.17 (8.56) | 20.00 (1.97)
MMSE | 24.91 (3.40) | 30.00 (0.00)
BI | 86.57 (4.16) | 100.00 (0.00)
BBS | 47.09 (6.47) | 56.00 (0.00)
Table 2. Pre-trained models used in this study [40].
Pre-Trained Model | Input Image Size | Design Layers | Parametric Size (MB) | Layer of Features
AlexNet | 227 × 227 | 25 | 227 | 17th
DenseNet201 | 224 × 224 | 709 | 77 | 706th
ResNet50 | 224 × 224 | 177 | 96 | 175th
VGG16 | 224 × 224 | 41 | 27 | 33rd
VGG19 | 224 × 224 | 47 | 535 | 39th
Table 3. Models trained with 60% of the data had accuracy and kappa values of no less than 0.95 and 0.88, respectively.
Model | Epoch | CNN | Learner | Accuracy | Sensitivity | Specificity | PPV | NPV | Kappa
M27 | 15 | DenseNet201 | SVM | 0.95 | 0.96 | 0.92 | 0.96 | 0.92 | 0.88
M28 | 15 | ResNet50 | SVM | 0.95 | 0.96 | 0.93 | 0.97 | 0.91 | 0.88
M12 | 10 | DenseNet201 | SVM | 0.96 | 0.99 | 0.88 | 0.95 | 0.97 | 0.90
M43 | 20 | ResNet50 | SVM | 0.96 | 0.97 | 0.93 | 0.97 | 0.93 | 0.90
M11 | 10 | AlexNet | SVM | 0.97 | 0.98 | 0.94 | 0.97 | 0.95 | 0.92
M15 | 10 | VGG19 | SVM | 0.97 | 0.98 | 0.94 | 0.97 | 0.95 | 0.92
M41 | 20 | AlexNet | SVM | 0.97 | 0.97 | 0.95 | 0.98 | 0.94 | 0.92
M44 | 20 | VGG16 | SVM | 0.97 | 0.98 | 0.93 | 0.97 | 0.96 | 0.92
M30 | 15 | VGG19 | SVM | 0.97 | 0.99 | 0.93 | 0.97 | 0.98 | 0.93
M45 | 20 | VGG19 | SVM | 0.97 | 0.97 | 0.96 | 0.98 | 0.94 | 0.93
M14 | 10 | VGG16 | SVM | 0.98 | 0.97 | 0.98 | 0.99 | 0.94 | 0.94
M26 | 15 | AlexNet | SVM | 0.98 | 0.98 | 0.98 | 0.99 | 0.95 | 0.95
M29 | 15 | VGG16 | SVM | 0.98 | 0.99 | 0.95 | 0.98 | 0.98 | 0.95
Table 4. Models trained with 70% of the data achieved accuracy and kappa values of no less than 0.95 and 0.88, respectively.
Model | Epoch | CNN | Learner | Accuracy | Sensitivity | Specificity | PPV | NPV | Kappa
M73 | 15 | ResNet50 | SVM | 0.95 | 0.97 | 0.91 | 0.96 | 0.93 | 0.89
M57 | 10 | DenseNet201 | SVM | 0.96 | 0.98 | 0.91 | 0.96 | 0.95 | 0.90
M88 | 20 | ResNet50 | SVM | 0.96 | 0.97 | 0.94 | 0.97 | 0.94 | 0.91
M58 | 10 | ResNet50 | SVM | 0.97 | 0.97 | 0.97 | 0.99 | 0.92 | 0.92
M87 | 20 | DenseNet201 | SVM | 0.97 | 0.97 | 0.95 | 0.98 | 0.94 | 0.92
M60 | 10 | VGG19 | SVM | 0.97 | 0.97 | 0.97 | 0.99 | 0.94 | 0.93
M56 | 10 | AlexNet | SVM | 0.98 | 0.99 | 0.94 | 0.97 | 0.98 | 0.94
M71 | 15 | AlexNet | SVM | 0.98 | 0.99 | 0.95 | 0.98 | 0.97 | 0.94
M75 | 15 | VGG19 | SVM | 0.98 | 0.99 | 0.95 | 0.98 | 0.97 | 0.94
M59 | 10 | VGG16 | SVM | 0.98 | 0.99 | 0.97 | 0.99 | 0.97 | 0.96
M74 | 15 | VGG16 | SVM | 0.98 | 1.00 | 0.94 | 0.97 | 1.00 | 0.96
M86 | 20 | AlexNet | SVM | 0.98 | 0.98 | 0.98 | 0.99 | 0.95 | 0.96
M89 | 20 | VGG16 | SVM | 0.98 | 0.98 | 0.98 | 0.99 | 0.95 | 0.96
M90 | 20 | VGG19 | SVM | 0.99 | 0.99 | 0.97 | 0.99 | 0.98 | 0.97
Table 5. Summary of the results in Table 3 and Table 4.
Deep Learning | Counts | Min. ACC | Max. ACC
AlexNet | 6 | 0.97 | 0.98
DenseNet201 | 4 | 0.95 | 0.97
ResNet50 | 5 | 0.95 | 0.97
VGG16 | 6 | 0.97 | 0.98
VGG19 | 6 | 0.97 | 0.99
Total | 27 | — | —
Note: Min. ACC and Max. ACC are the minimum and maximum accuracy.
Table 6. Comparison of the proposed methods with methods developed in related studies.
Author | Year | Methods | Task | Sample Size | Performance
Di Lazzaro G. et al. [48] | 2020 | SVM | motor | 65 | ACC: 97% (SVM)
Yuhan Zhou et al. [50] | 2020 | SVM, RF, ANN | gait | 239 | ACC: 89% (SVM); ACC: 73% (RF); ACC: 90% (ANN)
Tian Bao et al. [53] | 2019 | SVM | balance | 16 | ACC: 82% (SVM)
Jianwei Niu et al. [56] | 2019 | SVM | gait | 12 | ACC: 96.7% (SVM)
Narintip Roongbenjawan et al. [59] | 2020 | Cohort Study | balance | 73 | SEN: 92%; SPE: 81%
The Presented Methods | 2021 | DL + ML | balance | 55 | ACC: 98% (VGG16 + SVM); ACC: 99% (VGG19 + SVM)
Note: ACC is accuracy. SPE is specificity. RF is random forest. SVM is support vector machine. DL is deep learning. ML is machine learning.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

