Article

Real-Time Dynamic and Multi-View Gait-Based Gender Classification Using Lower-Body Joints

1 Department of Computer Science & IT, University of Malakand, Chakdara 18800, Pakistan
2 Department of Software Engineering, University of Malakand, Chakdara 18800, Pakistan
3 Department of Statistics, Abdul Wali Khan University Mardan, Mardan 23200, Pakistan
4 Department of Software Engineering, Mirpur University of Science and Technology, Mirpur 10250, Pakistan
5 Center of Research, Faculty of Engineering, Future University in Egypt, New Cairo 11835, Egypt
6 Faculty of Computers and Artificial Intelligence, Cairo University, Giza 12613, Egypt
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(1), 118; https://doi.org/10.3390/electronics12010118
Submission received: 5 October 2022 / Revised: 6 December 2022 / Accepted: 21 December 2022 / Published: 27 December 2022

Abstract

Gender classification based on gait is a challenging problem because humans may walk in different directions, at different speeds, and with varying gait patterns. Most investigations in the literature relied on gender-specific joints, whereas the comparison of the lower-body joints has received little attention. This work identifies the gender of a person from his or her walking style, captured with the Kinect Sensor, using only the lower-body joints. In this paper, a logistic-regression-based model for gender classification using lower-body joints is proposed. The proposed approach comprises several parts, including feature extraction, gait feature selection, and gender classification. Three-dimensional features of different joints were extracted using the Kinect Sensor. To select significant joints, a variety of statistical techniques were applied, including Cronbach's alpha, correlation, the t-test, and ANOVA. The average result of the Cronbach's alpha approach was 99.74%, which shows the reliability of the lower-body joints for gender classification. Similarly, the correlation results show a significant difference between the joints of males and females during gait. As the p-value for each of the lower-body joints is below 1%, the t-test and ANOVA demonstrated that all nine joints are statistically significant for gender classification. Finally, a binary logistic regression model was implemented to classify gender based on the selected features. The experiments in a real situation involved one hundred and twenty (120) individuals. The suggested method correctly classified gender using 3D data captured from the lower-body joints in real time with the Kinect Sensor, achieving 98.3% accuracy, and it outperformed existing image-based gender classification systems.

1. Introduction

In recent decades, the use of automatic identification systems has increased around the world, particularly in high-security zones. Banks and airports typically use biometric technology based on a person's biological or behavioural features to confirm their identity [1]. Physical biometrics evaluate features such as the iris, face, fingerprints, DNA, and hand geometry, whereas behavioural biometrics look at traits such as voice, signature, and gait [2]. Everyone has distinctive features that can be exploited for identification, and the science of biometrics relies on these unique features to confirm and authenticate a person's identity. Gait recognition is one of the few biometric technologies that attempt to identify people at a distance based on the way they walk [3]. Gait is a desirable biometric in security and surveillance systems because it can identify people without eye contact and even from a distance [4]. Gait recognition systems are based on video sequence analysis, which the Kinect Sensor, created by Microsoft for its Xbox 360 gaming platform, facilitates. With this equipment, a user can play video games through body movement alone, without the aid of any extra sensors. A colour camera, a 3D depth sensor, an IR laser, and microphones make up the Kinect Sensor. The full-body 3D motion of the human skeleton can be captured using the Kinect SDK for Windows in combination with a variety of APIs, which makes it a helpful tool for researchers seeking to develop an accurate gait detection application [5].
In general, researchers analyse video sequences captured by ordinary cameras to develop a gait recognition algorithm, but they face challenges with procedures such as gait cycle estimation and feature extraction. The Kinect Sensor can offer these features more precisely and supply the body's skeleton information; in contrast, the authors of [6,7] identified features by manually placing dot points on the person in each frame.
Several researchers have employed the Kinect Sensor for gender identification, using different numbers of biometric features. For example, Preis et al. [3] used 13 biometric features in their 2012 study on gait, and Alharbi et al. [8] used nine joints in their 2019 study on the skeleton. For gender classification, Ball et al. [4] utilised 18 joints, whereas Sinha et al. [9] used 14 joints. Similarly, Chi Xu et al. [10] used 20 joints for gender classification. In the existing approaches, the accuracy of gait-based gender classification mainly depends on 2D images and limited datasets, which is the main motivation for our research.

1.1. Research Gap and Motivation

  • While gender has been identified during gait by existing gait recognition methods, gender classification based on a comparison of the lower-body joints of males and females has been overlooked.
  • Recent studies [3,8,10,11,12,13,14] have classified gender based on gait; however, the accuracy of this classification mostly relies on 2D camera images.
  • Numerous studies have identified the gender of walkers [15,16], but their analyses used relatively small datasets, adversely affecting the accuracy of their systems.

1.2. Main Contributions

The main advances and contributions of the proposed work are as follows:
  • For the purpose of classifying gender, the authors developed their own dataset based on 3D features.
  • Different statistical strategies are employed to verify that the lower-body joints of each gender are appropriate for gender classification, including Cronbach’s alpha, correlation, the T-test, and one-way ANOVA.
  • A logistic-regression-based machine learning model is used for gender classification in real time.
  • The proposed gait-based system is compared with the most recent state-of-the-art approaches using a number of parameters, including accuracy and joint set.
In short, little attention has been given to gender classification based on 3D data using the lower-body joints. This study proposes a machine learning technique to assess the contributions of, and differences between, the joints during walking. In order to classify gender, we compare and assess each lower-body joint used during walking in both males and females. This research paper is organised as follows. The literature review is presented in Section 2; the proposed system is described in Section 3; Sections 4 and 5 analyse joint variation and feature selection; Sections 6 and 7 present the classification model and the experimental study; Section 8 compares the proposed system with existing work; and Section 9 concludes with future work.

2. Literature Review

Gait analysis techniques are divided into model-based approaches [14,17], appearance-based approaches [18,19,20,21], and deep-learning-based approaches [2,22,23]; gait-based systems can be categorised into the same three groups. Model-based techniques achieve gait-based gender identification by analysing the subject's body, and the development of a 3D skeleton model of an individual is a prerequisite for these systems [24,25,26]. The advantage of three-dimensional modelling is that it can adequately handle variation in viewing angle using structural models in a gait-based gender recognition and detection system. Such systems are robust to fluctuations in viewing angle because several standardised two-dimensional sensors or depth sensors help to analyse the major skeletal model parameters in 3D; as a result, these techniques achieved 92% recognition accuracy across five different views [25]. In order to handle changes in viewing angle, 3D dense models created from several 2D cameras are essential [27]; these algorithms can adapt the attributes collected from a model to a given perspective, achieving a recognition accuracy of 75% across twelve different perspectives of twenty individuals. The limitations of depth-sensing cameras and the need for camera calibration before use are some of the drawbacks of 3D model-based systems, so these methods lose their effectiveness in real-world settings.
According to a method presented by Yoo et al. [28], the gait pattern is configured as a 2-dimensional link coordinate system and applied as a unique feature for recognition. Lee et al. [29] divided the body's gait pattern into seven parts, each of which was shown to have an elliptical form, and the minor axis, major axis, and orientation of each ellipse were calculated as separate tasks. Using the pose-based voting (PBV) technique, Isaac et al. [30] created a recognition model that does not require a full gait cycle; instead of the well-known support vector machine (SVM), the authors employed linear discriminant analysis (LDA) for gender classification. Lee et al. [31] presented a gender classification system using random forest (RF) and support vector machine (SVM) for optimal feature selection; to highlight how gait parameters are affected by gender, they examined temporal, kinematic, and muscular-activity variables. Model-based systems typically struggle to handle low-resolution input images, and they often involve more difficult computations.
For gender recognition and classification, an appearance-based technique primarily uses spatiotemporal data captured from a person's gait patterns, and it usually uses a subject's entire gait cycle to determine gender. The gait energy image (GEI) is the most well-known appearance-based approach. A gender classification method that can handle the arbitrary movement of subjects was introduced by Lu et al. [32]: the gait sequences of the subjects are compared and assigned to a particular cluster based on the comparison result, and finally a gait feature called the cluster-based averaged gait image is produced. To extract gait-based features from the GEI, Liu et al. [33] applied a Fourier transform.
Another study [34] introduced a technique for gender classification that involves extracting the silhouette from each image and calculating gait patterns over a specified period of time to analyse the individual; classification is then performed using support vector machines (SVM). The study in [35] used the k-NN classification method in conjunction with GEI and active energy imaging (AEI) to accomplish gender detection, using the publicly available CASIA-B and SOTON-A datasets. Bei et al. [14] presented a technique for calculating a subGEI with fewer frames, as opposed to a whole gait cycle; they took the synthetic optical flow of multiple subGEIs and exploited it as a temporal characteristic, and they also used a two-stream CNN to integrate the GEI and optical-flow information for further gait analysis. For gait description, Hassan et al. [19] employed a 5/3 lifting wavelet method and used PCA to create dimensionally reduced vectors for each walk series. Because appearance-based algorithms focus only on the gait silhouette, they have lower computational complexity and less noise. It is clear from [36] that the identification of participants, estimation of age, and classification of gender can all be done simultaneously. In [37], GEI and CNN were combined; silhouette images were used as the input, and body mass index was estimated as the output. Zhang et al. [22] performed multi-task learning for age estimation and gender classification using a deep CNN. Similarly, Sakata et al. [2] introduced a CNN-based system for gender- and age-based classification, and Liu et al. [33] performed gait-based gender recognition using a VGGNet-16 deep convolutional model in combination with SVM. Similarly, Azhar et al. [38] used six joints for gender classification with a Kinect Sensor.
The literature currently available indicates that deep-learning-based algorithms for gait-based gender classification have shown better results; however, these systems require a very powerful hardware configuration [10]. The suggested method uses 3D data captured from the lower-body joints in real time during walking to classify gender based on appearance.
In the existing gait recognition algorithms, the contributions of the lower-body joints have received minimal attention, and only a few joints have been compared between males and females during walking. In this research, the lower-body joints are compared statistically, and gender is classified in a real-world scenario based on the contribution of the lower-body joints. The suggested system identifies the gender of a walking person moving in any direction with varied speed, gait style, and occluded joints, and it highlights the significance of the lower-body joints in gender classification.

3. Proposed System

In this section, a method for comparing how the lower-body joints of male and female walkers contribute to gender classification is described. For the comparison of male and female lower-body joints, different statistical techniques, such as correlation analysis, the t-test, and ANOVA, were used, and the reliability of the joints was determined using the Cronbach's alpha approach. A detailed list of the lower-body joints and their significance in determining gender is also included here. Moreover, the suggested system model contains two working phases: training and testing. To determine an appropriate model for gender classification, the 3D joint positions of the subjects are recorded using an MS Kinect during the training phase. In the testing phase, the gender of a walking individual is finally determined using a machine learning model based on binary logistic regression and the percentages of the most contributing lower-body joints. The proposed system model is given in Figure 1.

3.1. Consent of Participants

All of the subjects were normal people who were informed about the nature and purpose of the tests carried out in this study. They were also briefed on the type and nature of the recorded data, the data privacy policy, how the data would be processed, and the prospective outcome of the research. All of the motivated participants were asked to give information such as name, gender, and age.

3.2. MS Kinect

Microsoft Kinect, a motion-sensing tool, was created for the Xbox 360 gaming platform (see Figure 2). It is an innovative sensor that produces 3D images and can recognise faces and voices. For games and human–robot interaction, it has 3-dimensional depth cameras, an RGB camera, a microphone, and a motorised pivot. Microsoft Kinect plays a key role in recognising gaits.
MS Kinect gathers 3D data from human body joints at 30 frames per second via its SDK. The twenty different joints that make up the tracked skeleton are depicted in Figure 4. For every frame, the X, Y, and Z positions of every joint are provided. In this study, Microsoft Kinect was used to retrieve the joint positions along the X, Y, and Z axes, as all three values are related to the walking direction. Column vectors in the X, Y, and Z axes were extracted to represent the subject's skeleton joints.

3.3. Dataset Generation

Since no other dataset seemed to have accurate gender information and all 3-dimensional skeleton joint positions, the authors created their own dataset with 373 people. The 3D positions of the joints were recorded using a Microsoft Kinect. Each volunteer walked freely in the working area of the sensor, and the Kinect was used to track the three-dimensional joint positions of participants walking in various directions (see Figure 3). As the 3D sensor and SDK provide a human body skeleton and the ability to track 20 different skeletal positions, the proposed dataset contains X, Y, and Z axes for 20 joints, for a total of 60 properties. We extracted a set of lower-body features: the left hip, right hip, centre hip, right knee, left knee, right ankle, left ankle, right foot, and left foot. The proposed system therefore uses 27 features from 9 body joints, shown in the red boxes in Figure 4.
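To make the feature layout concrete (20 joints × 3 axes = 60 columns, reduced to 9 lower-body joints × 3 axes = 27 features), the following Python sketch shows one way such lower-body features could be sliced out of per-frame skeleton arrays. The joint ordering below is assumed for illustration and is not necessarily the Kinect SDK's exact enumeration.

```python
import numpy as np

# Assumed joint ordering for illustration (the Kinect SDK tracks 20 joints).
JOINTS = ["HipCenter", "Spine", "ShoulderCenter", "Head",
          "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
          "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
          "HipLeft", "KneeLeft", "AnkleLeft", "FootLeft",
          "HipRight", "KneeRight", "AnkleRight", "FootRight"]
LOWER_BODY = ["HipCenter", "HipLeft", "HipRight", "KneeLeft", "KneeRight",
              "AnkleLeft", "AnkleRight", "FootLeft", "FootRight"]

def lower_body_features(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, 20, 3) array of X, Y, Z joint positions.
    Returns an (n_frames, 27) array holding only the 9 lower-body joints."""
    idx = [JOINTS.index(j) for j in LOWER_BODY]
    return frames[:, idx, :].reshape(len(frames), -1)

# Example: 90 frames (3 s at 30 fps) of random positions stand in for a walk.
features = lower_body_features(np.random.rand(90, 20, 3))
print(features.shape)  # (90, 27)
```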

3.4. Pre-Processing of Data

As the Kinect captures several observations of each joint, it is hard to evaluate and compare the raw data. The authors needed a single number for each person and joint that best represented the properties of that joint, so average values were used for each feature. Before statistical analysis, the retrieved data were pre-processed. During pre-processing, the joint means, the cluster means of the X, Y, and Z coordinates, and the cumulative means were computed. The mean of the left-foot joint can be expressed mathematically as

$$\bar{X}_{FootL} = \frac{1}{n}\sum_{i=1}^{n} X_i \qquad (1)$$

for $i = 1, 2, 3, \ldots, n$, where $\bar{X}_{FootL}$ is the mean of the left-foot joint. The cluster mean over the $i$th observations, $j$th coordinates, and $k$th joints is defined as

$$\bar{\bar{X}} = \frac{\sum_{k=1}^{p}\sum_{j=1}^{3}\sum_{i=1}^{n} X_{ijk}}{3pn} \qquad (2)$$

where $i = 1, 2, 3, \ldots, n$; $j = 1, 2, 3$; and $k = 1, 2, 3, \ldots, p$. Furthermore, the cumulative mean of a person's X, Y, and Z coordinates is used to estimate a single value for each joint when comparing male and female joints during gait. The cumulative mean for the left-foot joint is computed as indicated in Equation (3):

$$\bar{\bar{X}} = \frac{\bar{x} + \bar{y} + \bar{z}}{3} \qquad (3)$$

where $\bar{\bar{X}}$ is the cluster mean of $\bar{X}_{FootL}$, $\bar{Y}_{FootL}$, and $\bar{Z}_{FootL}$ in the X, Y, and Z coordinates. The hip cluster mean and the lower-body-joint cluster mean are then

$$\bar{\bar{X}}_{Hip} = \frac{\bar{\bar{x}}_{HipL} + \bar{\bar{x}}_{HipR} + \bar{\bar{x}}_{HipC}}{3} \qquad (4)$$

$$\bar{\bar{X}}_{LBJ} = \frac{\bar{\bar{X}}_{K} + \bar{\bar{X}}_{F} + \bar{\bar{X}}_{A}}{3} \qquad (5)$$

where $\bar{\bar{X}}_{K}$, $\bar{\bar{X}}_{F}$, and $\bar{\bar{X}}_{A}$ denote the cluster means of the knee, foot, and ankle joints, respectively.
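A minimal Python/NumPy sketch of Equations (1)-(5) follows; the array shapes and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def joint_axis_mean(x: np.ndarray) -> float:
    """Equation (1): mean of one axis of one joint over n frames."""
    return float(x.mean())

def grand_cluster_mean(frames: np.ndarray) -> float:
    """Equation (2): mean over all n frames, 3 axes, and p joints;
    frames has shape (n, p, 3)."""
    return float(frames.mean())

def cumulative_mean(joint_xyz: np.ndarray) -> float:
    """Equation (3): average of the X, Y, and Z axis means of one joint;
    joint_xyz has shape (n_frames, 3)."""
    return float(joint_xyz.mean(axis=0).mean())

def hip_cluster_mean(hip_l, hip_r, hip_c) -> float:
    """Equation (4): average of the cumulative means of the three hip joints."""
    return (cumulative_mean(hip_l) + cumulative_mean(hip_r)
            + cumulative_mean(hip_c)) / 3.0
```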

3.4.1. Variance of Joints

The authors calculated variances and standard deviations to reflect differences in joint parameters during walking between males and females. The variance of all joints was measured on all 3-dimensional axes; for example, the variance of the left-foot joint was calculated by applying Equation (6).
$$\sigma^2_{X_{FootL}} = \frac{1}{N}\sum_{i=1}^{N}\left(X_i - \mu_X\right)^2 \qquad (6)$$

where $N$ denotes the population size (the number of observations), $\sigma^2$ the population variance, $X_i$ the observed value, and $\mu_X$ the corresponding mean. Similarly, Equation (6) can be repeated for the Y and Z directions.

3.4.2. Combined Variance of Joints

After measuring the variance of every joint, Equation (7) gives the combined variance of all axes of a joint:

$$\delta^2_P = \frac{1}{N}\left(N_1\delta^2_1 + N_2\delta^2_2 + N_3\delta^2_3 + \cdots + N_k\delta^2_k\right) \qquad (7)$$

where $\delta^2_P$ is the combined variance and $\delta^2_1, \delta^2_2, \delta^2_3, \ldots, \delta^2_k$ are the separate variances of the joints. Similarly, $N = N_1 + N_2 + N_3$, where $N$ represents the total number of observations and $N_1$, $N_2$, and $N_3$ are the sizes of the X, Y, and Z axes. The combined variance for the joint $FootL$ may be calculated as

$$\delta^2_{P_{FootL}} = \frac{1}{N}\left(N_1\delta^2_{X_{FootL}} + N_2\delta^2_{Y_{FootL}} + N_3\delta^2_{Z_{FootL}}\right) \qquad (8)$$

The sizes of the X, Y, and Z axes are $N_1$, $N_2$, and $N_3$, respectively, and $\delta^2_{P_{FootL}}$ is the combined variance of the left-foot joint. The pooled variance of the other joints can be calculated in the same way using Equations (7) and (8).

3.4.3. Joint Coefficient of Variation

The coefficient of variation (C.V) can be used to determine which of the twenty joints are consistent in both males and females. The coefficient of variation is calculated as

$$C.V\left(X_{FootL}\right) = \frac{S.D\left(X_{FootL}\right)}{\mathrm{mean}\left(X_{FootL}\right)} \times 100 \qquad (9)$$

where $S.D(X_{FootL})$ is the standard deviation of the left-foot joint and $\mathrm{mean}(X_{FootL})$ is its corresponding mean.
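The dispersion measures of Sections 3.4.1-3.4.3 can be sketched in the same way; the snippet below assumes each axis series is a NumPy array and is illustrative rather than the authors' code.

```python
import numpy as np

def population_variance(x: np.ndarray) -> float:
    """Equation (6): population variance of one axis of one joint."""
    return float(np.mean((x - x.mean()) ** 2))

def combined_variance(axes) -> float:
    """Equations (7)-(8): size-weighted combination of per-axis variances,
    e.g. over the X, Y, and Z series of the left-foot joint."""
    sizes = [len(a) for a in axes]
    weighted = sum(n * population_variance(a) for n, a in zip(sizes, axes))
    return weighted / sum(sizes)

def coefficient_of_variation(x: np.ndarray) -> float:
    """Equation (9): C.V = S.D / mean * 100."""
    return float(x.std() / x.mean() * 100.0)
```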

4. Average Variation in Particular Human Joints during Gait

As shown in Table 1, the mean, standard deviation, and coefficient of variation were computed for various joints of males and females during casual walking.
The coefficients of variation for the male and female centre-hip joints are 11.63 and 12.50, and for the right-hip joints 11.88 and 12.60, respectively. This indicates that the right-hip and centre-hip joints are consistent for both males and females during walking, though males are more consistent than females. In comparison to the other joints, the C.V for the male left-foot joint is the highest (17.58), and the C.V for the female right-foot joint is likewise the highest (17.82). This demonstrates that the left-foot and right-foot joints vary considerably from person to person in both males and females.

4.1. Comparison of Male/Female Joints

Different methods, including correlation analysis, the t-test, and the ANOVA technique, were applied to measure the mean differences between all joints of male and female subjects during walking.

4.1.1. Comparison of Lower-Body Joints by Gender Using Correlation Analysis

The Pearson correlation technique was used to find the relationships between various joints and directions while walking. Pearson correlation analysis is a statistical approach for determining how two variables are correlated. The correlation always lies between −1 and +1; a correlation close to +1 implies a strong, direct relationship between the variables, whereas a correlation close to −1 indicates a strong, inverse relationship. When the correlation is close to zero, the variables have a weak relationship. The Pearson correlation is stated as

$$r = \frac{n\sum xy - \sum x \sum y}{\sqrt{\left(n\sum x^2 - \left(\sum x\right)^2\right)\left(n\sum y^2 - \left(\sum y\right)^2\right)}}$$
We examined the correlations between numerous joints and directions to determine the differences between male and female joints when walking. This showed significant differences between male and female joints.
Table 2 shows that the correlations are considerable for most joints, with both strong and weak relationships between directions during gait. For example, during walking, males have an 85% correlation between the X and Y directions at the centre-hip joint, whereas females have a 15% correlation. At the hip centre, males have an 88% correlation between the Y and Z directions, whereas females have a 20% one; for the X and Z directions at the hip centre, males have a 90% correlation and females an 81% one. The correlation between the X and Z directions at the right knee is 91% for males and 79% for females; at the right ankle it is 93% for males and 82% for females; and at the right foot it is 92% for males and 88% for females. In a few joints, the relationship during walking is weak and negligible: at the left hip, females have 1% and 2% correlations between the X and Y directions and between the Y and Z directions, respectively, which denote weak, insignificant relationships, whereas the male X and Y correlation at the same joint (left hip) is 82%, a strong and significant relationship.
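The per-pair correlations of Table 2 can be reproduced with SciPy; the sketch below uses synthetic placeholder series rather than the study's measurements, correlating the X and Z values of one joint for one group.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative only: per-subject X and Z means for one joint and one gender.
rng = np.random.default_rng(0)
x_series = rng.normal(0.99, 0.12, 38)                  # placeholder X means
z_series = 0.9 * x_series + rng.normal(0.0, 0.04, 38)  # correlated Z means

r, p = pearsonr(x_series, z_series)
print(f"r = {r:.2f}, p = {p:.4f}")  # a strong, significant X-Z relationship
```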

4.1.2. T-Test Analysis for the Comparison of Lower-Body Joints by Gender

A two-sample t-test was used to evaluate the mean differences between male and female joints during walking; for example, whether there is a significant difference between male and female walking at the centre-hip joint. As humans may move in three directions (X, Y, and Z), we calculated the means of the X, Y, and Z directions independently and combined them, and the t-test was applied to compare the combined means. We also checked whether a t-test assuming equal variances and one assuming unequal variances yield the same result. Table 3 shows the results of the two-sample t-test. Our hypotheses are formulated as follows.
$H_0$: For the left foot, there is no difference between male and female joints during walking, i.e.,

$$\mu_{FootL}(M) = \mu_{FootL}(F)$$

$H_1$: During walking, there is a considerable difference between males and females at the left foot, i.e.,

$$\mu_{FootL}(M) \neq \mu_{FootL}(F)$$

where $\mu_{FootL}(M)$ is the mean for the male left-foot joint and $\mu_{FootL}(F)$ is the mean for the female left-foot joint. The t-test and the ANOVA approach were applied to compare each male and female joint pair simultaneously. For the t-test, we assumed that the variances of the male and female joint contributions are equal. The t statistic is defined as

$$T = \frac{\bar{X}_{1i} - \bar{X}_{2i}}{S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}$$

where $\bar{X}_{1i}$ represents the pooled arithmetic mean of each male joint and $\bar{X}_{2i}$ represents the pooled arithmetic mean of each female joint. The male sample size is $n_1$, and the female sample size is $n_2$. The degrees of freedom for the t-test are

$$\nu = n_1 + n_2 - 2$$

where $\nu$ denotes the degrees of freedom. The combined standard deviation of the male and female joints is $S_p$, defined as

$$S_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}$$

$$S_1 = \sqrt{\frac{\sum\left(x_{i(M)} - \bar{x}_{(M)}\right)^2}{n_1 - 1}}$$

$$S_2 = \sqrt{\frac{\sum\left(x_{i(F)} - \bar{x}_{(F)}\right)^2}{n_2 - 1}}$$

where $s_1^2$ and $s_2^2$ are the variances of the male and female joints, respectively.
Table 3 shows that all the lower-body joints are significantly different at the 1% level, since their p-values are below 1%. This implies that male and female lower-body joints differ significantly during a casual walk. Thus, the t-test indicates that male and female lower-body joints follow different patterns during gait.
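The pooled two-sample t-test above is available directly in SciPy. The sketch below uses synthetic samples whose means and standard deviations mimic the left-foot row of Table 1, with an assumed 38/34 male/female split (consistent with the 70 degrees of freedom in Table 3); it is illustrative only, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Synthetic per-subject left-foot means shaped like Table 1 (assumed split).
foot_l_male = rng.normal(0.7054, 0.1240, 38)
foot_l_female = rng.normal(0.9499, 0.1616, 34)

# equal_var=True is the pooled-variance test of the T statistic above;
# equal_var=False would give Welch's test for the unequal-variance check.
t_stat, p_val = ttest_ind(foot_l_male, foot_l_female, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")  # reject H0 at the 1% level
```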

5. Feature Selection

To compute the Cronbach's alpha values, we first calculated the adjusted mean and adjusted standard deviation. Cronbach's alpha measures a feature set's reliability, i.e., how closely a group of items are related.
Cronbach's alpha yielded an average result of 99.74%, as shown in Table 4, indicating that the joint features are highly reliable for analysis.
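For reference, Cronbach's alpha has a standard closed form; the minimal NumPy sketch below applies the textbook formula to an items matrix with one column per joint feature, as an illustration rather than the authors' exact computation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects, n_items) matrix, one column per joint feature.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Example: highly consistent columns yield alpha close to 1, as in Table 4.
rng = np.random.default_rng(2)
base = rng.normal(19.0, 3.0, (68, 1))
items = base + rng.normal(0.0, 0.05, (68, 9))  # nine correlated joint features
print(f"alpha = {cronbach_alpha(items):.4f}")
```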

6. Classification of Gender Using Lower-Body Joints

To classify whether a walking person is female or male, the authors used a binary logistic regression model based on lower-body joints, as our response variable is binary—i.e., the gender of a walking person is either female or male.

Logistic Regression Based Machine Learning Model

Using previous observations in a dataset, the statistical technique of logistic regression can predict a binary outcome, such as yes or no. A logistic regression model examines the relationship between one or more independent variables in order to predict a dependent variable. The response variable in the proposed model is therefore binary, with two options: female or male. The model estimates the probability of the walking person's gender and then compares the estimated probability to a predefined threshold. As 0.5 is the predefined threshold for a binary logistic regression model, if $P_0 > 0.5$, the walking person is assumed to be a woman, and if $P_1 > 0.5$, the system decides on male. A classification table determines which joints are most important in determining whether people are female or male; it was built by comparing the estimated gender probabilities to the threshold value. The binary logistic regression model is defined mathematically as
$$P_1 = \frac{e^{(\alpha + Hip + LBJ)}}{1 + e^{(\alpha + Hip + LBJ)}}$$

$$P_0 = \frac{1}{1 + e^{(\alpha + Hip + LBJ)}}$$
The intercept is $\alpha$, and $LBJ$ stands for the lower-body joints. The expected probability of a walking person being male is $P_1$, and the expected probability of the person being female is $P_0$. Although twenty different joints can be detected by the Kinect (see Figure 4), we were interested in the lower-body joints, and we found that only nine joints strongly indicate whether a walking person is female or male. The authors used repeated modelling to test all the lower-body joints and accurately identify the gender. If $P_1 > 0.5$ for a walking subject, the system determines that the subject is male; if $P_0 > 0.5$, the system determines that the subject is female. The best-fit model uses a threshold value of 0.5.
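To make the decision rule concrete, a minimal scikit-learn sketch follows. The two predictors mirror the cluster means $CA_{HIP}$ and $CA_{FAK}$ used in Section 7.1; the four training rows are example values taken from Table 5 purely for illustration, so the fitted coefficients will not match Table 7.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Example rows lifted from Table 5: [CA_HIP, CA_FAK]; 1 = male, 0 = female.
X = np.array([[1.0619, 0.8776],
              [0.9278, 0.7097],
              [1.4004, 1.2517],
              [1.3855, 1.2300]])
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)
p_male = model.predict_proba(X)[:, 1]       # P1 from the equations above
gender = np.where(p_male > 0.5, "M", "F")   # 0.5 decision threshold
print(p_male.round(3), gender)
```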

7. Experimental Study

The experimental study was based on a self-generated dataset that includes 3D gait features for 373 subjects. As we are interested in classifying the gender of people of various ages, we included subjects ranging from 7 to 70 years old. This dataset has a variety of advantages. Compared to previous self-created datasets, it is bigger and provides sufficient subject information for practically all studies. In addition, the large age range of 7 to 70 years offers an almost ideal dataset for testing. The dataset also includes 3D joint positions, whereas the majority of currently accessible datasets only include 2-dimensional joint data. Based on these features, the dataset worked well in our situation as a performance test to confirm the accuracy of the suggested gait-based gender classification algorithm. In order to assess the effectiveness of the various joints used for gender classification, we conducted experimental testing both statistically and in a real-time scenario.

7.1. Classification Accuracy Using Statistical Tools

The results of the best-fit binary logistic regression model for gender classification are shown in Table 5: the real genders (male = 1, female = 0), the predicted genders, and the error term for the best-fit model. In order to develop the regression model, we considered a total of 68 respondents. The statistical results show only one error, for a single male (recorded as 1); there were no other errors. In the fitted binary logistic regression model, the response variable is gender and the predictors are the cluster means of the lower-body joints: the lower-body joint cluster (left knee, right knee, right ankle, left ankle, left foot, and right foot) and the hip cluster (left hip, hip centre, and right hip). Clustering the means of the lower-body joints minimised multicollinearity and the variance of the error term; multicollinearity may arise in overdefined models (with many independent variables) and result in high variance. Using the cluster means reduced the number of predictors from nine joints to just two (the hip cluster and the lower-body joint cluster), yielding the best-fitted model for gender classification during walking.

Results of the Best Fitted Model Based on ANOVA

For each factor, Table 6 displays the source of variation (SOV), degrees of freedom (DF), sum of squares (SS), mean squares (MS), chi-square statistic, and p-value. At the 1% level of significance, the terms $CA_{HIP}$ (the cluster mean of the hip joints) and $CA_{FAK}$ (the cluster mean of the knee, ankle, and foot joints) are highly significant. According to the results, people can be classified by gender using the cluster mean of the left-hip, hip-centre, and right-hip joints, together with the cluster mean of the left-knee, right-knee, left-ankle, right-ankle, left-foot, and right-foot joints.
Table 7 shows the coefficients and standard errors of the variables for the best-fit binary logistic regression model. Based on a person's gait parameters, these values were used to determine whether the person is male or female.

7.2. Protocol and Task for Real-Time Experiment

We used the Microsoft Kinect to test the validity of the proposed system. Multiple records were taken for each person in order to evaluate the accuracy of the Microsoft Kinect. The measurements were calculated over a range of distances, with a maximum measured distance of 3 m (see Figure 5). The MS Kinect was mounted at a height of 1.3 m and an angle of 0 degrees, capturing at 30 frames per second. The IR depth sensor detects the IR beam reflected from the IR emitter in order to obtain a depth image; the distance between the sensor and the target provides the depth information. For the MS Kinect to detect the 3D joints, the lower-body joints must be visible; therefore, 3 m is the maximum distance allowed between a walking participant and the Microsoft Kinect. Within this range, detection is unaffected by the walking person's distance from the sensor.
The route and direction to be taken during the experiment were first demonstrated to the participants. They were free to move around the Kinect detection area in any direction and at any distance. Because the features were extracted using the averages of all frames, there was no time restriction. At least one hundred and twenty (120) volunteers took part in the experiment. The task involved the individuals walking within the Kinect recognition area, as depicted in Figure 6. The authors developed the system on a PC with an Intel Core i5 2.0 GHz processor and 2 GB of main memory; the computer was connected to the Kinect Sensor through a USB connector. The system was built on the Windows 7 64-bit operating system with the Kinect SDK v1.7. Microsoft Visual Studio 17 and the .NET Framework 4.0 were used as the programming environment for this project.

7.3. Classification Accuracy in Real-Time Scenario

In order to analyse our method thoroughly, we evaluated a number of its components and factors, including the effectiveness of different joint combinations. Our system's accuracy for joint sets of 10, 13, 15, 17, and 19 joints and for the lower-body joints (LBJ) was 92.4%, 93.3%, 94.1%, 95.0%, 94.9%, and 98.3%, respectively. When the lower-body joints were used in our suggested approach, the accuracy was 98.3%. In order to select the best model for identifying gender, we examined a variety of walking-sequence combinations for training and testing, keeping a minimum gap of two to three joints between each set. By contrasting every joint of males and females, we discovered a natural overall difference, and the full set of lower-body joints significantly improved the classification. The small variance across joint sets indicates the stability of our classifier. We used the following definition of accuracy to measure classification performance:
$$accuracy = \frac{TP + TN}{TP + FP + TN + FN}$$
Table 8 summarises our ability to classify each gender precisely. The proposed technique achieved the same accuracy for males and females, and comparatively better accuracy than other systems.
The covariates were the averages of all the human joints taken collectively, and the dependent variable was the gender of the walking person (y = 1, 0). Using the G statistic of 71.463, with 20 DF and a p-value of 0.000, we rejected the null hypothesis that all of the coefficients of the best-fit logistic regression model are equal to zero. We also verified each coefficient's statistical significance at the 5% level. The confusion matrix summarises the predictions of the best-fit binary logistic regression model (see Figure 7): 49 out of 50 males and 29 out of 30 females were correctly predicted; only one man and one woman were misclassified. With a threshold value of 0.5, the algorithm correctly predicts gender 98.3 percent of the time.
With higher standard errors, accuracy decreases; with smaller standard errors, accuracy increases. These joints offer accurate gender identification, since their standard errors are low. Additionally, the p-value for each joint is less than 0.05, indicating that the joints are statistically significant for gender prediction.
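The accuracy formula and the confusion matrix of Figure 7 can be computed with scikit-learn, as in the sketch below. The labels are hypothetical: 120 walkers with an assumed even gender split and one misclassification per class, which reproduces the reported 98.3% figure; the study's actual class split may differ.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical ground truth: 60 males (1) and 60 females (0) -- assumed split.
y_true = np.array([1] * 60 + [0] * 60)
y_pred = y_true.copy()
y_pred[0] = 0    # one male misclassified as female
y_pred[60] = 1   # one female misclassified as male

print(confusion_matrix(y_true, y_pred))  # rows: true class, cols: predicted
print(f"accuracy = {accuracy_score(y_true, y_pred):.3f}")  # (TP+TN)/(TP+FP+TN+FN)
```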

8. Comparison with Existing Work

The authors compared the proposed gender classification strategy with existing methods in terms of feature extraction techniques, feature sets (joints), and test individuals; the proposed system achieved the best recognition rate (see Table 9). Ball et al. [4] exploited 18 joints to extract information from 2D data; with just four walking individuals, their system achieved a recognition accuracy of 43.60%. Similarly, Sinha et al. [9] tested five walking participants and employed 14 distinct joints for feature extraction from 2D images, achieving an accuracy of 85.00%. Preis et al. [3] used 13 joints and nine subjects, reaching 90.00%. Alharbi et al. [8] retrieved gait features from nine body joints by analysing 2D pictures of twenty walking participants, with 97.00% accuracy. Chi Xu et al. [10] proposed a gait recognition approach in 2021 that reached an accuracy of 94.27% by extracting gait features from all body joints with twenty subjects. In 2022, Azhar et al. [38] tested eighty (80) people using a smaller set of gait parameters (six joints) and achieved 97.50% accuracy. The existing approaches, which use a variety of tools and devices for gait recognition, have shortcomings that affect their performance: a high computational cost caused by extensive feature sets, small numbers of tested individuals, low recognition rates caused by the use of binary images for classification, and a lack of real-time capability caused by 2D gait recognition pipelines. To the best of our knowledge, gender has not previously been classified using only the lower-body joints as in our approach. The use of lower-body joint features makes the suggested method efficient in terms of recognition accuracy, and the comparison suggests that it performs well against current approaches.
Furthermore, a comparison with other classification techniques, such as a DCT-based method, CNN-based methods, and support vector machines (SVM), was performed. The proposed method outperformed the existing methods, providing 98.3% accuracy. The results are shown in Table 10.

Discussion

According to the statistical analysis, there are significant differences in the coefficients of variation of the lower-body joints during gait for males and females. The male left-foot joint's coefficient of variation was 17.58, which indicates high variability compared to the female joint; individual differences clearly exist in how men execute walking at the left foot. The coefficient of variation for the left-foot joint of women was 17.01, indicating less variability: in contrast to males, female movement at the left-foot joint is more consistent. Male performance is more consistent than female performance at the right-foot joint, as indicated by coefficients of variation of 16.34 for males and 17.82 for females at this joint. When applying the t-test to all lower-body joints, we see a significant difference between male and female joints: the p-values for every joint are zero (see Table 3), indicating that the lower-body joints of men and women function differently during walking. According to the correlation analysis, both males and females move significantly in the X and Z coordinates, because the correlations between these coordinates are large and highly significant; for both males and females, the correlations at these coordinates are more than 90%, and the p-values are zero. On the other hand, for the majority of the lower-body joints in both males and females, the relationships between the X and Y coordinates and between the Y and Z coordinates are weak and below 50%. Male and female lower-body joints are thus quite different according to the correlation study. The t-test results revealed a significant difference between male and female lower-body joints, and the same result was obtained using the ANOVA technique, which compares all of the lower-body joints of males and females while reducing type-I error. All joints in the lower body of either gender are statistically significantly different at the 1% and 5% levels.
According to the t-test, the ANOVA, and the correlation analysis, male and female joints contribute differently to walking.

9. Conclusions and Future Work

In this study, 3D joint data from male and female subjects were analysed for gender detection during walking. Statistical testing techniques, such as the t-test, one-way ANOVA, and correlation analysis, showed that the contributions of the male lower-body joints during gait differ from those of the female lower-body joints. The results showed significant individual variation in the left- and right-foot joints of both males and females. For all males, the right knee and the centre of the hip played the same roles during gait. The findings also revealed that when a woman walks, every lower-body joint moves more frequently than when a man walks. During walking, there is significant fluctuation in the male left-knee joint, indicating that the contribution of the left-knee joint in men differs from person to person. The female right-foot, right-ankle, and left-foot joints show a lot of variation, indicating that each woman uses these joints differently. Finally, the walking person's gender was identified with 98.3% accuracy using the lower-body joints. The experimental results were compared with deep learning and traditional machine learning methods to show the superior performance of the proposed system; the comparative studies showed that the suggested machine learning approach is robust and outperformed the most recent models. The majority of published studies on gait-based gender recognition used very small sample sizes, which may have affected gender identification.
In our upcoming study, we will work to enhance these findings by using a much larger database. The suggested method will also be used to investigate which joints contribute most to running, jumping, and sports for men and women. Furthermore, these data can be used for simulating human identification.

Author Contributions

Conceptualization, K.U.; Methodology, N.A.G.; Validation, S.U.; Formal analysis, K.U.R.; Resources, A.K.; Writing—review & editing, M.A.; Funding acquisition, S.M.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors certify that they have no conflict of interest with regard to this research.

References

  1. Dikovski, B.; Madjarov, G.; Gjorgjevikj, D. Evaluation of different feature sets for gait recognition using skeletal data from Kinect. In Proceedings of the 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 26–30 May 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1304–1308. [Google Scholar]
  2. Sakata, A.; Takemura, N.; Yagi, Y. Gait-based age estimation using multi-stage convolutional neural network. IPSJ Trans. Comput. Vis. Appl. 2019, 11, 1–10. [Google Scholar] [CrossRef]
  3. Preis, J.; Kessel, M.; Werner, M.; Linnhoff-Popien, C. Gait recognition with kinect. In Proceedings of the 1st International Workshop on Kinect in Pervasive Computing, New Castle, UK, 18–22 June 2012; pp. 1–4. [Google Scholar]
  4. Ball, A.; Rye, D.; Ramos, F.; Velonaki, M. Unsupervised clustering of people from 'skeleton' data. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 3–5 March 2012; pp. 225–226. [Google Scholar]
  5. Jana, A. Kinect for Windows SDK programming Guide; Packt Publishing Ltd.: Birmingham, UK, 2012. [Google Scholar]
  6. Singh, J.P.; Jain, S. Person identification based on gait using dynamic body parameters. In Proceedings of the Trendz in Information Sciences & Computing (TISC2010), Chennai, India, 17–19 December 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 248–252. [Google Scholar]
  7. Jhapate, A.K.; Singh, J.P. Gait based human recognition system using single triangle. Int. J. Comput. Sci. Technol. 2011, 2, 128–131. [Google Scholar]
  8. Alharbi, A.; Alharbi, F.; Kamioka, E. Skeleton based gait recognition for long and baggy clothes. In Proceedings of the MATEC Web of Conferences. EDP Sciences, Wellington, New Zealand, 10–12 December 2019; Volume 277, p. 03005. [Google Scholar]
  9. Sinha, A.; Chakravarty, K.; Bhowmick, B. Person identification using skeleton information from kinect. In Proceedings of the International Conference on Advances in Computer-Human Interactions, Las Vegas, NV, USA, 21–26 July 2013; pp. 101–108. [Google Scholar]
  10. Xu, C.; Makihara, Y.; Liao, R.; Niitsuma, H.; Li, X.; Yagi, Y.; Lu, J. Real-time gait-based age estimation and gender classification from a single image. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 3460–3470. [Google Scholar]
  11. Cho, S.H.; Park, J.M.; Kwon, O.Y. Gender differences in three dimensional gait analysis data from 98 healthy Korean adults. Clin. Biomech. 2004, 19, 145–152. [Google Scholar] [CrossRef]
  12. Gehring, D.; Mornieux, G.; Fleischmann, J.; Gollhofer, A. Knee and hip joint biomechanics are gender-specific in runners with high running mileage. Int. J. Sport. Med. 2014, 35, 153–158. [Google Scholar] [CrossRef] [PubMed]
  13. Sakaguchi, M.; Ogawa, H.; Shimizu, N.; Kanehisa, H.; Yanai, T.; Kawakami, Y. Gender differences in hip and ankle joint kinematics on knee abduction during running. Eur. J. Sport Sci. 2014, 14, S302–S309. [Google Scholar] [CrossRef] [PubMed]
  14. Bei, S.; Deng, J.; Zhen, Z.; Shaojing, S. Gender recognition via fused silhouette features based on visual sensors. IEEE Sens. J. 2019, 19, 9496–9503. [Google Scholar] [CrossRef]
  15. Willson, J.D.; Petrowitz, I.; Butler, R.J.; Kernozek, T.W. Male and female gluteal muscle activity and lower extremity kinematics during running. Clin. Biomech. 2012, 27, 1052–1057. [Google Scholar] [CrossRef] [PubMed]
  16. Almonroeder, T.G.; Benson, L.C. Sex differences in lower extremity kinematics and patellofemoral kinetics during running. J. Sport. Sci. 2017, 35, 1575–1581. [Google Scholar] [CrossRef] [PubMed]
  17. Sudha, L.R.; Bhavani, R. Gait based gender identification using statistical pattern classifiers. Int. J. Comput. Appl. 2021, 40, 30–35. [Google Scholar] [CrossRef]
  18. Hu, M.; Wang, Y. A new approach for gender classification based on gait analysis. In Proceedings of the 2009 Fifth International Conference on Image and Graphics, Xi’an, China, 20–23 September 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 869–874. [Google Scholar]
  19. Hassan, O.M.S.; Abdulazeez, A.M.; TİRYAKİ, V.M. Gait-based human gender classification using lifting 5/3 wavelet and principal component analysis. In Proceedings of the 2018 International Conference on Advanced Science and Engineering (ICOASE), Duhok, Iraq, 9–11 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 173–178. [Google Scholar]
  20. Kwon, B.; Lee, S. Joint swing energy for skeleton-based gender classification. IEEE Access 2021, 9, 28334–28348. [Google Scholar] [CrossRef]
  21. Lu, J.; Wang, G.; Huang, T.S. Gait-based gender classification in unconstrained environments. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA, 2012; pp. 3284–3287. [Google Scholar]
  22. Zhang, Y.; Huang, Y.; Wang, L.; Yu, S. A comprehensive study on gait biometrics using a joint CNN-based method. Pattern Recognit. 2019, 93, 228–236. [Google Scholar] [CrossRef]
  23. Marín-Jiménez, M.J.; Castro, F.M.; Guil, N.; De la Torre, F.; Medina-Carnicer, R. Deep multi-task learning for gait-based biometrics. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 106–110. [Google Scholar]
  24. Rouzbeh, S.; Babaei, M. Human gait recognition using body measures and joint angles. Int. J. 2015, 6, 1493–2305. [Google Scholar]
  25. Wang, Y.; Sun, J.; Li, J.; Zhao, D. Gait recognition based on 3D skeleton joints captured by kinect. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 3151–3155. [Google Scholar]
  26. Zhao, G.; Liu, G.; Li, H.; Pietikainen, M. 3D gait recognition using multiple cameras. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Southampton, UK, 10–12 April 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 529–534. [Google Scholar]
  27. Muramatsu, D.; Shiraishi, A.; Makihara, Y.; Yagi, Y. Arbitrary view transformation model for gait person authentication. In Proceedings of the 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), Arlington, VA, USA, 23–27 September 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 85–90. [Google Scholar]
  28. Yoo, J.H.; Hwang, D.; Nixon, M.S. Gender classification in human gait using support vector machine. In International Conference on Advanced Concepts for Intelligent Vision Systems; Springer: Berlin/Heidelberg, Germany, 2005; pp. 138–145. [Google Scholar]
  29. Lee, L.; Grimson, W.E.L. Gait analysis for recognition and classification. In Proceedings of the Fifth IEEE International Conference on Automatic Face Gesture Recognition, Washington, DC, USA, 20–21 May 2002; IEEE: Piscataway, NJ, USA, 2002; pp. 155–162. [Google Scholar]
  30. Isaac, E.R.; Elias, S.; Rajagopalan, S.; Easwarakumar, K. Multiview gait-based gender classification through pose-based voting. Pattern Recognit. Lett. 2019, 126, 41–50. [Google Scholar] [CrossRef]
  31. Lee, M.; Lee, J.H.; Kim, D.H. Gender recognition using optimal gait feature based on recursive feature elimination in normal walking. Expert Syst. Appl. 2022, 189, 116040. [Google Scholar] [CrossRef]
  32. Lu, J.; Wang, G.; Moulin, P. Human identity and gender recognition from gait sequences with arbitrary walking directions. IEEE Trans. Inf. Forensics Secur. 2013, 9, 51–61. [Google Scholar] [CrossRef]
  33. Liu, Y.q.; Wang, X. Human gait recognition for multiple views. Procedia Eng. 2011, 15, 1832–1836. [Google Scholar] [CrossRef] [Green Version]
  34. Upadhyay, J.; Gonsalves, T. Robust and Lightweight System for Gait-Based Gender Classification toward Viewing Angle Variations. AI 2022, 3, 538–553. [Google Scholar] [CrossRef]
  35. Martín Félez, R.; García Jiménez, V.; Sánchez Garreta, J.S. Gait-based Gender Classification Considering Resampling and Feature Selection. Int. J. Image Graph. 2013, 1, 85–89. [Google Scholar] [CrossRef] [Green Version]
  36. Castro, F.M.; Marín-Jiménez, M.J.; Guil, N.; Perez De La Blanca, N. Automatic learning of gait signatures for people identification. In Proceedings of the International Work-Conference on Artificial Neural Networks, Cadiz, Spain, 14–16 June 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 257–270. [Google Scholar]
  37. Zhang, S.; Wang, Y.; Li, A. Gait-based age estimation with deep convolutional neural network. In Proceedings of the 2019 International Conference on Biometrics (ICB), Crete, Greece, 4–7 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–8. [Google Scholar]
  38. Azhar, M.; Ullah, S.; Raees, M.; Rahman, K.U.; Rehman, I.U. A real-time multi view gait-based automatic gender classification system using kinect sensor. Multimed. Tools Appl. 2022, 1–24. [Google Scholar] [CrossRef]
  39. Kastaniotis, D.; Theodorakopoulos, I.; Economou, G.; Fotopoulos, S. Gait-based gender recognition using pose information for real time applications. In Proceedings of the 2013 18th International Conference on Digital Signal Processing (DSP), Santorini, Greece, 1–3 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–6. [Google Scholar]
  40. Camalan, S.; Sengul, G.; Misra, S.; Maskeliunas, R.; Damaševičius, R. Gender detection using 3d anthropometric measurements by kinect. Metrol. Meas. Syst. 2018, 25, 253–267. [Google Scholar]
Figure 1. Detailed model of the proposed system.
Figure 2. Microsoft Kinect Sensor.
Figure 3. Walking of different subjects from different directions.
Figure 4. Various human joints.
Figure 5. Walking direction of subjects.
Figure 6. Gender classification in a real-time scenario.
Figure 7. Confusion matrix.
Table 1. Mean, standard deviation (S.D), and coefficient of variation (C.V) for various joints of males and females.

| Joints | Mean (M) | S.D (M) | C.V (M) | Mean (F) | S.D (F) | C.V (F) |
|---|---|---|---|---|---|---|
| Hip Center | 0.9918 | 0.1154 | 11.63 | 1.1906 | 0.1489 | 12.50 |
| Hip Left | 0.9738 | 0.1172 | 12.03 | 1.1673 | 0.1529 | 13.10 |
| Hip Right | 0.9724 | 0.1155 | 11.88 | 1.1754 | 0.1481 | 12.60 |
| Knee Left | 0.8130 | 0.1415 | 17.40 | 1.0458 | 0.1602 | 15.31 |
| Knee Right | 0.8486 | 0.1160 | 13.67 | 1.0804 | 0.1576 | 14.59 |
| Ankle Left | 0.7400 | 0.1224 | 16.53 | 0.9787 | 0.1615 | 16.50 |
| Ankle Right | 0.7607 | 0.1172 | 15.40 | 1.0120 | 0.1733 | 17.13 |
| Foot Left | 0.7054 | 0.1240 | 17.58 | 0.9499 | 0.1616 | 17.01 |
| Foot Right | 0.7217 | 0.1180 | 16.34 | 0.9827 | 0.1751 | 17.82 |
Table 2. Pearson correlations between several joints and direction pairs in a three-dimensional coordinate system (X, Y, and Z); p-values in parentheses. The first three data columns refer to males and the last three to females.

| Joints | X, Y (M) | Y, Z (M) | X, Z (M) | X, Y (F) | Y, Z (F) | X, Z (F) |
|---|---|---|---|---|---|---|
| HIP-C | 0.85 (0.00) | 0.88 (0.00) | 0.90 (0.00) | 0.15 (0.00) | 0.20 (0.00) | 0.81 (0.00) |
| HIP-L | 0.82 (0.00) | 0.88 (0.00) | 0.94 (0.00) | 0.01 (0.79) | 0.02 (0.68) | 0.84 (0.00) |
| HIP-R | 0.82 (0.00) | 0.76 (0.00) | 0.93 (0.00) | 0.24 (0.00) | 0.19 (0.00) | 0.60 (0.00) |
| Knee-L | 0.35 (0.00) | 0.49 (0.00) | 0.90 (0.00) | −0.14 (0.00) | −0.37 (0.00) | 0.90 (0.00) |
| Knee-R | 0.64 (0.00) | 0.44 (0.00) | 0.91 (0.00) | −0.20 (0.00) | −0.13 (0.00) | 0.79 (0.00) |
| Ankle-L | −0.49 (0.00) | −0.27 (0.00) | 0.94 (0.00) | −0.44 (0.00) | −0.45 (0.00) | 0.87 (0.00) |
| Ankle-R | −0.49 (0.00) | 0.03 (0.49) | 0.93 (0.00) | −0.47 (0.00) | −0.25 (0.00) | 0.82 (0.00) |
| Foot-L | −0.59 (0.00) | −0.46 (0.00) | 0.94 (0.00) | −0.54 (0.00) | −0.51 (0.00) | 0.88 (0.00) |
| Foot-R | −0.53 (0.00) | 0.03 (0.44) | 0.92 (0.00) | −0.39 (0.00) | −0.32 (0.00) | 0.88 (0.00) |
Table 3. Results of a two-sample t-test used to compare male and female joints when walking.

| Joints | DF | Pooled S.D | T-Value | p-Value |
|---|---|---|---|---|
| Hip-Center | 70 | 0.1222 | 6.61 | 0.000 |
| HIP-Left | 70 | 0.1246 | −6.30 | 0.000 |
| Knee-Left | 70 | 0.1432 | 6.63 | 0.000 |
| Ankle-Left | 70 | 0.1304 | −7.38 | 0.000 |
| Foot-Left | 70 | 0.1315 | 7.50 | 0.000 |
| HIP-Right | 70 | 0.1220 | −6.75 | 0.000 |
| Knee-Right | 70 | 0.1252 | −7.53 | 0.000 |
| Ankle-Right | 70 | 0.1307 | −7.82 | 0.000 |
| Foot-Right | 70 | 0.1317 | −8.03 | 0.000 |
Table 4. The results of the adjusted mean (Adj.Mean), adjusted standard deviation (Adj.SD), and Cronbach's alpha for the reliability of human body joints.

| Joints | Adj.Mean | Adj.SD | Cronbach's Value |
|---|---|---|---|
| $\bar{X}_{C(HipC)}$ | 18.941 | 2.986 | 0.9973 |
| $\bar{X}_{C(HipL)}$ | 18.960 | 2.985 | 0.9973 |
| $\bar{X}_{C(KL)}$ | 19.111 | 2.968 | 0.9976 |
| $\bar{X}_{C(AL)}$ | 19.182 | 2.971 | 0.9973 |
| $\bar{X}_{C(FL)}$ | 19.215 | 2.969 | 0.9973 |
| $\bar{X}_{C(HipR)}$ | 18.959 | 2.985 | 0.9972 |
| $\bar{X}_{C(KR)}$ | 19.074 | 2.976 | 0.9973 |
| $\bar{X}_{C(AR)}$ | 19.158 | 2.967 | 0.9974 |
| $\bar{X}_{C(FR)}$ | 19.194 | 2.964 | 0.9974 |
Table 5. The results of the best-fit binary logistic regression model for gender classification (G: actual gender, male = 1, female = 0; Pred.: predicted gender based on $P_1$).

| User | G | CA HIP | CA FAK | Pred. | Error | User | G | CA HIP | CA FAK | Pred. | Error |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1.0619 | 0.8776 | M | 0 | 35 | 1 | 1.0411 | 0.8276 | M | 0 |
| 2 | 1 | 0.9278 | 0.7097 | M | 0 | 36 | 1 | 1.1467 | 0.9330 | M | 0 |
| 3 | 1 | 1.1036 | 0.9136 | M | 0 | 37 | 1 | 0.9476 | 0.7316 | M | 0 |
| 4 | 1 | 0.9240 | 0.7133 | M | 0 | 38 | 1 | 0.8698 | 0.6852 | M | 0 |
| 5 | 1 | 0.9290 | 0.7080 | M | 0 | 39 | 1 | 1.0730 | 0.8154 | M | 0 |
| 6 | 1 | 0.8596 | 0.6621 | M | 0 | 40 | 1 | 1.0176 | 0.8280 | M | 0 |
| 7 | 1 | 0.8717 | 0.6392 | M | 0 | 41 | 1 | 0.9713 | 0.7446 | M | 0 |
| 8 | 1 | 0.8596 | 0.6194 | M | 0 | 42 | 1 | 0.9935 | 0.7810 | M | 0 |
| 9 | 1 | 0.8781 | 0.6560 | M | 0 | 43 | 1 | 0.9414 | 0.7348 | M | 0 |
| 10 | 1 | 0.8758 | 0.6727 | M | 0 | 44 | 1 | 0.9870 | 0.7672 | M | 0 |
| 11 | 1 | 0.9758 | 0.7492 | M | 0 | 45 | 1 | 1.0038 | 0.7751 | M | 0 |
| 12 | 1 | 0.9798 | 0.7752 | M | 0 | 46 | 1 | 1.0461 | 0.8371 | M | 0 |
| 13 | 1 | 0.9758 | 0.7492 | M | 0 | 47 | 1 | 0.8943 | 0.7098 | M | 0 |
| 14 | 1 | 0.9480 | 0.6971 | M | 0 | 48 | 1 | 1.0763 | 0.8893 | M | 0 |
| 15 | 1 | 0.9178 | 0.7039 | M | 0 | 49 | 1 | 1.1586 | 0.9620 | M | 0 |
| 16 | 1 | 0.9940 | 0.7421 | M | 0 | 50 | 1 | 0.8837 | 0.6202 | M | 0 |
| 17 | 1 | 0.9826 | 0.7739 | M | 0 | 51 | 0 | 0.9802 | 0.8125 | M | 0 |
| 18 | 1 | 0.7697 | 0.5388 | M | 0 | 52 | 0 | 0.9572 | 0.7641 | F | 0 |
| 19 | 1 | 0.7924 | 0.5703 | M | 0 | 53 | 0 | 1.0880 | 0.8801 | F | 0 |
| 20 | 1 | 0.7667 | 0.5503 | M | 0 | 54 | 0 | 1.2656 | 1.0605 | M | 0 |
| 21 | 1 | 0.7990 | 0.6231 | M | 0 | 55 | 0 | 0.9865 | 0.7771 | F | 0 |
| 22 | 1 | 1.0619 | 0.8776 | M | 0 | 56 | 0 | 1.0262 | 0.8417 | F | 0 |
| 23 | 1 | 1.0495 | 0.8451 | M | 0 | 57 | 0 | 1.0680 | 0.9208 | F | 0 |
| 24 | 1 | 1.1044 | 0.8752 | M | 0 | 58 | 0 | 1.0694 | 0.8988 | F | 0 |
| 25 | 1 | 1.0942 | 0.8713 | M | 0 | 59 | 0 | 1.1701 | 1.0266 | F | 0 |
| 26 | 1 | 1.0635 | 0.8583 | M | 0 | 60 | 0 | 1.1775 | 0.9876 | F | 0 |
| 27 | 1 | 1.0937 | 0.9009 | M | 0 | 61 | 0 | 1.2206 | 1.0195 | F | 0 |
| 28 | 1 | 1.1039 | 0.8714 | M | 0 | 62 | 0 | 1.1809 | 0.9804 | F | 0 |
| 29 | 1 | 0.9094 | 0.6939 | M | 0 | 63 | 0 | 1.1509 | 1.0612 | F | 0 |
| 30 | 1 | 1.2038 | 0.9622 | M | 0 | 64 | 0 | 1.4004 | 1.2517 | F | 0 |
| 31 | 1 | 0.7656 | 0.5456 | M | 0 | 65 | 0 | 1.3580 | 1.2233 | F | 0 |
| 32 | 1 | 0.9924 | 0.7706 | M | 0 | 66 | 0 | 1.3855 | 1.2300 | F | 0 |
| 33 | 1 | 0.9720 | 0.7673 | M | 0 | 67 | 0 | 1.3217 | 1.1747 | F | 0 |
| 34 | 1 | 1.3082 | 1.1164 | F | 1 | 68 | 0 | 1.3935 | 1.2372 | F | 0 |
Table 6. The results of the analysis of variance (ANOVA).

| SOV | DF | SS | MS | Chi-Square | p-Value |
|---|---|---|---|---|---|
| Constant | 2 | 41.17 | 20.5873 | 41.17 | 0.000 |
| $CA_{HIP}$ | 1 | 28.84 | 28.84 | 28.84 | 0.000 |
| $CA_{FAK}$ | 1 | 16.33 | 16.33 | 16.33 | 0.000 |
| Error | 65 | 37.42 | 0.5757 | | |
| Total | 67 | 78.60 | | | |
Table 7. The coefficients and standard errors of the variables for the best-fit binary logistic regression model.

| Term | Coefficient | SE Coefficient |
|---|---|---|
| Constant | −4.41 | 6.18 |
| $CA_{HIP}$ | 66.2 | 26.3 |
| $CA_{FAK}$ | −75.0 | 26.4 |
Table 8. Classification accuracy (%) of the proposed system with different sets of joints.

| Gender | 10 Joints | 13 Joints | 15 Joints | 17 Joints | 19 Joints | LBJ Joints |
|---|---|---|---|---|---|---|
| Male | 91.6 | 93.3 | 95.0 | 95.0 | 96.6 | 98.3 |
| Female | 93.3 | 93.3 | 93.3 | 95.0 | 93.3 | 98.3 |
| Both | 92.4 | 93.3 | 94.1 | 95.0 | 94.9 | 98.3 |
Table 9. Comparison with current gait recognition techniques.

| Related Work | Year | Extracted Features | Joints | Subjects | Accuracy |
|---|---|---|---|---|---|
| Ball et al. [4] | 2012 | 2D images | 18 | 4 | 43.60% |
| Sinha et al. [9] | 2013 | 2D images | 14 | 5 | 85.00% |
| Preis et al. [3] | 2012 | 2D images | 13 | 9 | 90.00% |
| Alharbi et al. [8] | 2019 | 2D images | 9 | 20 | 97.00% |
| Chi Xu et al. [10] | 2021 | Gait Energy Images | all | 20 | 94.27% |
| Azhar et al. [38] | 2022 | 3D positions | 6 | 80 | 97.50% |
| Proposed system | 2022 | 3D positions | 9 | 120 | 98.30% |
Table 10. Comparison of the proposed model with other gait-based gender classification systems.

| Classification Method | Dataset | Accuracy |
|---|---|---|
| DCT-based method [34] | OU-MVLP | 95.33% |
| CNN-based method [10] | GaitSet | 94.3% |
| CNN+SVM [36] | TUM-GAID | 88.9% |
| SVM [39] | UPCVgait | 96.67% |
| SVM [40] | Self-created | 96.77% |
| Proposed system | Self-created | 98.30% |