#### **3. Results**

Figure 5 presents the results from the correlation analysis (in terms of R<sup>2</sup>) using data for five ratio-based body measurements for slow, normal and fast walk speeds. According to the interpretations (i.e., weak correlation: 0.10–0.39, moderate correlation: 0.40–0.69, strong correlation: 0.70–0.89 and very strong correlation: 0.90–1.00) [39], the R<sup>2</sup> values between HW1 vs. HW2, HW2 vs. HW3, HW2 vs. A1, HW1 vs. A2, HW2 vs. A2, HW3 vs. A2 and A1 vs. A2 were generally weak for slow and normal walk speeds. For fast walk speeds, weak R<sup>2</sup> values were found between HW1 vs. A2, HW2 vs. A2, HW3 vs. A2 and A1 vs. A2, and moderate R<sup>2</sup> values were found between HW1 vs. HW2, HW2 vs. HW3 and HW2 vs. A1. In addition, moderate R<sup>2</sup> values were found between HW1 vs. HW3, HW1 vs. A1 and HW3 vs. A1 for slow walk speeds, but the corresponding values obtained for normal and fast walk speeds were generally strong.
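The pairwise analysis above reduces to computing a squared Pearson correlation for each measurement pair and bucketing it into the bands cited from [39]. The following sketch illustrates the idea; the helper names and the toy data are ours, not the paper's, and the real analysis would use the extracted measurement time series:

```python
from math import sqrt

# Hypothetical helper (not from the paper): classify an R^2 value using the
# interpretation bands cited from [39].
def correlation_strength(r2):
    if r2 >= 0.90:
        return "very strong"
    if r2 >= 0.70:
        return "strong"
    if r2 >= 0.40:
        return "moderate"
    if r2 >= 0.10:
        return "weak"
    return "negligible"

# R^2 between two series, computed as the squared Pearson correlation.
def pairwise_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return (cov / (sx * sy)) ** 2

# Toy data: two perfectly linearly related series give R^2 close to 1.0,
# i.e., a "very strong" correlation under the bands above.
hw1 = [1.0, 2.0, 3.0, 4.0, 5.0]
hw3 = [2.1, 4.2, 6.3, 8.4, 10.5]
r2 = pairwise_r2(hw1, hw3)
print(correlation_strength(r2))
```

In the study, this classification is applied to every one of the ten measurement pairs, separately for each walk speed condition.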

**Figure 5.** Coefficient of determination (R<sup>2</sup>) among data for five ratio-based body measurements for (**a**) slow, (**b**) normal and (**c**) fast walk speeds. HW1, ratio of the full-body height to the full-body width; HW2, ratio of the full-body height to the mid-body width; HW3, ratio of the full-body height to the lower-body width; A1, ratio of the apparent body area to the full-body area; and A2, ratio of the area between the legs to the full-body area. Weak correlation: 0.10–0.39, moderate correlation: 0.40–0.69, strong correlation: 0.70–0.89 and very strong correlation: 0.90–1.00.

Figure 6 presents comparisons of the mean (±SD) classification accuracy and mean (±SD) training time for biLSTM-based walking speed classification using walk speed patterns established with one, two, three, four and five ratio-based body measurements. Details of the mean (±SD) classification accuracy and mean (±SD) training time are provided in the Supplementary Material (Tables S1–S5). Walking speed classification using walk speed patterns established with five ratio-based body measurements achieved a mean (±SD) classification accuracy of 88.05 (±8.85)% (Figure 6 and Table S1, result from our previous study [13]). Walk speed patterns established with two combinations of three ratio-based body measurements, namely, (HW1, HW2, A2) and (HW2, HW3, A2), exceeded this accuracy, achieving mean (±SD) classification accuracies of 92.7 (±8.01)% and 92.79 (±7.8)%, respectively (Figure 6 and Table S3). In addition, the walk speed patterns established using the other combinations of three ratio-based body measurements, namely, (A1, A2, HW3), (A1, A2, HW2), (HW1, HW3, A2), (HW1, HW3, A1), (HW1, HW2, A1) and (HW1, HW2, HW3), and three combinations of four ratio-based body measurements, namely, (HW1, HW2, A1, A2), (HW1, HW2, HW3, A1) and (HW1, HW2, HW3, A2), achieved mean classification accuracies very close (i.e., within 2% less) to that achieved with the walk speed patterns established with five ratio-based body measurements (Figure 6 and Tables S2 and S3). In contrast, the mean accuracies achieved using walk speed patterns established with combinations of one and two ratio-based body measurements were less than 70% and 74%, respectively (Figure 6 and Tables S4 and S5). These results clearly show that walk speed patterns established with combinations of three ratio-based body measurements outperformed those established with five ratio-based body measurements in terms of the mean (±SD) classification accuracy. Moreover, the mean training time for walking speed classification using walk speed patterns established with combinations of three ratio-based body measurements was reduced to approximately 14 to 15 min (Figure 6 and Table S3), compared with 17.43 min for the combination of five ratio-based body measurements (Figure 6 and Table S1, result from our previous study [13]).
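The evaluation above sweeps every subset of one to four of the five measurements, i.e., 5 + 10 + 10 + 5 = 30 candidate combinations in addition to the five-measurement baseline. A minimal sketch of that enumeration (the biLSTM training step itself is performed separately and is not shown here):

```python
from itertools import combinations

# The five ratio-based body measurements evaluated in the study.
MEASUREMENTS = ("HW1", "HW2", "HW3", "A1", "A2")

# Yield every combination of one to four measurements; each combination
# defines one candidate walk speed pattern to train and score.
def enumerate_feature_subsets():
    for k in range(1, 5):
        for combo in combinations(MEASUREMENTS, k):
            yield combo

subsets = list(enumerate_feature_subsets())
print(len(subsets))  # prints 30: C(5,1) + C(5,2) + C(5,3) + C(5,4)
```

Each of these subsets is then used to build walk speed patterns and train the biLSTM classifier, producing the per-combination accuracies reported in Tables S2–S5.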

**Figure 6.** Mean ± SD classification accuracy and mean ± SD training time for biLSTM-based walking speed classification using walk speed patterns based on one, two, three, four and five ratio-based body measurements. HW1, ratio of the full-body height to the full-body width; HW2, ratio of the full-body height to the mid-body width; HW3, ratio of the full-body height to the lower-body width; A1, ratio of the apparent body area to the full-body area; and A2, ratio of the area between the legs to the full-body area.

#### **4. Discussion**

The primary objective of this study was to determine the optimal ratio-based body measurement combination needed to provide the information that can define and predict walk patterns in terms of speed with a high classification accuracy. To accomplish this goal, this study adopted two commonly used methods for selecting useful and optimal input features (e.g., ratio-based body measurements). First, this study analysed the correlations among five ratio-based body measurements to understand the relationships among these body measurements in slow, normal and fast walking speed conditions. Second, the performance (in terms of the mean ± SD classification accuracy and mean ± SD training time) of a biLSTM deep learning-based walking speed classification model was evaluated using walking speed patterns created with all possible combinations of one, two, three and four out of the five ratio-based body measurements. A combination with fewer than five ratio-based body measurements was deemed optimal if it yielded a mean ± SD classification accuracy higher than, or within 2% of [23,24], the mean ± SD classification accuracy obtained in our previous study [13], and if the ratio-based body measurements used for defining the walk pattern exhibited low correlations among them.
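The optimality rule just described can be sketched as a simple filter over per-combination accuracies: keep combinations within the 2% margin of the five-measurement baseline, then prefer those with the fewest measurements. The accuracy values below for the one- and two-measurement rows are illustrative placeholders, not the paper's full results; only the two three-measurement values and the 88.05% baseline come from the reported tables. The low-correlation criterion is omitted here for brevity:

```python
# Five-measurement baseline accuracy (%) from our previous study [13].
BASELINE_ACCURACY = 88.05

# Illustrative per-combination mean accuracies; the two three-measurement
# entries match Table S3, the others are placeholder values for this sketch.
candidate_accuracies = {
    ("HW1",): 62.0,
    ("HW1", "HW2"): 71.5,
    ("HW1", "HW2", "A2"): 92.70,
    ("HW2", "HW3", "A2"): 92.79,
}

# A combination is acceptable if its accuracy is higher than, or within
# 2 percentage points below, the baseline.
def is_acceptable(accuracy, baseline=BASELINE_ACCURACY, margin=2.0):
    return accuracy >= baseline - margin

# Among acceptable combinations, prefer those with the fewest measurements.
def optimal_subsets(results):
    acceptable = [c for c, acc in results.items() if is_acceptable(acc)]
    if not acceptable:
        return []
    fewest = min(len(c) for c in acceptable)
    return [c for c in acceptable if len(c) == fewest]

print(optimal_subsets(candidate_accuracies))
# prints [('HW1', 'HW2', 'A2'), ('HW2', 'HW3', 'A2')]
```

Under this rule, the two three-measurement combinations reported in the Results emerge as optimal candidates, with the correlation check applied afterwards.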

This study utilized data for five ratio-based body measurements for the correlation analysis and biLSTM deep learning-based walking speed classification. Based on the correlation analysis and the biLSTM deep learning-based walking speed classification models, this study found that combinations of three ratio-based body measurements with minimal correlation among them yielded the highest mean ± SD classification accuracy for walking speed classification using the biLSTM deep learning-based model. More specifically, HW1 exhibits low correlations with HW2 and A2, and the combination of these three ratio-based body measurements achieved a classification accuracy of 92.7 (±8.01)% (Figures 5 and 6 and Table S3). HW2 has low correlations with HW3 and A2, and the combination of these three ratio-based body measurements achieved a classification accuracy of 92.79 (±7.8)% (Figures 5 and 6 and Table S3). Furthermore, the mean ± SD classification accuracies achieved with the combinations of one and two ratio-based body measurements with low correlations among them were markedly lower than the mean ± SD classification accuracy achieved in our previous study [13] (Figure 6 and Tables S4 and S5). Moreover, the other combinations of ratio-based body measurements achieved classification accuracies within 2% of the mean ± SD classification accuracy achieved in our previous study [13], and the body measurements in these combinations generally exhibited moderate to strong correlations between them (Figures 5 and 6 and Tables S1–S3). This finding implies that walking speed patterns identified from a few ratio-based body measurements can produce the best performance for deep learning-based classification of walking speed if the correlations between the ratio-based body measurements are low.
Additionally, full-body image sequences are necessary for more accurate classification, since the ratio-based body measurements that yielded excellent classification accuracy (i.e., HW1, HW2 and HW3) require the full-body height.

This study is significant in several contexts. First, video image sequences display apparent body measurements rather than the real physiological dimensions of the human body [12,15,16]. It is thus crucial to examine body measurements that are independent of the walking individual-to-camera distance (i.e., ratio-based body measurements) and can be obtained from video image sequences, and to investigate the interactions between these ratio-based body measurements in order to identify the optimal body measurements for defining and predicting a walk pattern in terms of speed [12,13]. By performing a correlation analysis and a rigorous deep learning-based assessment, the current study evaluated combinations of three out of five potential ratio-based body measurements. Combinations of three ratio-based body measurements provided sufficient information to estimate walk patterns in terms of speed with a classification accuracy greater than 92%, which is better than the results achieved in previous studies (88.57% [12] and 88.05% [13]). In addition, the previous study [12] trained the model with a multiclass setting (i.e., all three types of walking speed patterns) and tested the models using a single-class setting (i.e., any one of the three walking speed patterns), whereas the current study used a multiclass setting as well as multiple runs for the training, validation and testing of the model, which is beneficial for achieving accurate classification and building a successful model [40,41]. It is difficult to compare our results with the previously published study [14], which used body-worn clothing for body measurement extraction, as that study only proposed extraction methods and did not perform classification-related experiments. Additionally, the data collection procedure, experimental design and participants' demographic characteristics of the previous study [14] differ completely from those of the current study.
Second, the results of the current study are supported by earlier studies [17,18], which claim that high-dimensional input features (such as several ratio-based body measurements) containing redundant data may hinder the performance of a deep learning-based architecture. These previous studies [17,18] also assert that the highest performance of a deep learning-based architecture can be attained when the data that provide the most useful information are employed, which agrees with the results of the current study. Furthermore, in the future, clinicians may utilise this method for routine gait monitoring in healthcare facilities and old-age homes, as it can identify walking speed in an indoor environment with improved classification accuracy [42]. Current patient monitoring systems include implanted devices and wearable sensors that might require invasive procedures and body attachment, which are difficult and often unpleasant for patients. Therefore, remote patient monitoring using existing surveillance cameras could be a more viable alternative for the constant observation of patient mobility. In addition, human resources and the battery life of traditional sensors are critical constraints for long-term patient monitoring. As such, camera-based patient mobility monitoring might be more cost-effective while alleviating the burden on resources in clinical settings [43].

Although the current study shows considerable potential for selecting the optimal ratio-based body measurements for creating walk patterns that enable walking speed classification using a deep learning-based architecture with the highest classification accuracy, it only evaluated healthy individuals. Experiments that include a gait-impaired population will be considered in the future. Additionally, this study recruited participants with a wide range of ages (15 to 65 years); however, the walk patterns of the participants might change according to their age [44,45]. Walk speed classification across participants of different ages could be another research topic of interest in the future. Additionally, this study solely used area-based and height-to-width ratio-based body measurements for the classification of walking speeds. Future studies will involve estimating additional spatiotemporal parameters, such as stride and step length, joint angles, velocity and acceleration, to gain a deeper understanding of the health of individuals and to classify typical and atypical gait patterns. Moreover, only the biLSTM approach was used in this study for the classification task. Future research will utilise more cutting-edge classification algorithms to reach the best classification accuracy.

#### **5. Conclusions**

In summary, this study found that combinations of three ratio-based body measurements extracted from lateral-view 2D images of marker-free walking individuals can potentially define and predict walk patterns in terms of speed with classification accuracies greater than 92% using a biLSTM. The findings of this study support the optimal application of ratio-based body measurement data that change with variations in walking speed, form periodic or quasi-periodic patterns, and, more importantly, can be extracted from marker-free conventional camera images to classify walking speeds with high classification accuracy using a contemporary deep learning method. Additionally, the results obtained in this study confirm that the use of high-dimensional input features, such as multiple ratio-based body measurements, hinders the performance of deep learning-based architectures if the data are redundant. Furthermore, if the data that yield the best information are employed, the deep learning-based architecture exhibits peak performance. This walking speed classification method using optimal data is a simple yet effective technique with considerable potential for use in clinical settings and elderly care facilities.

**Supplementary Materials:** The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bioengineering9110715/s1, Table S1: classification accuracies for walking speed classification using the walk pattern established with five RBBMs in our previous study; Table S2: classification accuracies for walking speed classification using walk patterns established with four RBBMs; Table S3: classification accuracies for walking speed classification using walk patterns established with three RBBMs; Table S4: classification accuracies for walking speed classification using walk patterns established with two RBBMs; Table S5: classification accuracies for walking speed classification using walk patterns established with one RBBM. RBBMs refers to ratio-based body measurements.

**Author Contributions:** Guarantor: T.S., M.F.R., D.I. and N.U.A. are responsible for the entirety of the work and the final decision to submit the manuscript; study concept and design: all authors; data acquisition, processing, and analysis: T.S. and M.F.R.; critical review and interpretation of data: K.H.G., S.M.R., M.A.A., M.A.A.M., D.I., O.A. and M.A.; drafting of the manuscript: T.S. and M.F.R.; critical revision of the manuscript: all authors; obtaining funding: O.A. and M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data generated and/or analysed in the current study are available from the following publicly available database: Osaka University-Institute of Scientific and Industrial Research (OU-ISIR) Dataset 'A' (www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitTM.html, accessed on 23 September 2022).

**Acknowledgments:** The authors extend their appreciation to the College of Applied Medical Sciences Research Centre and the Deanship of Scientific Research at King Saud University for funding this research.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

