Search Results (5)

Search Parameters:
Keywords = cross-view gait recognition

15 pages, 1999 KB  
Article
Multi-Biometric Feature Extraction from Multiple Pose Estimation Algorithms for Cross-View Gait Recognition
by Ausrukona Ray, Md. Zasim Uddin, Kamrul Hasan, Zinat Rahman Melody, Prodip Kumar Sarker and Md Atiqur Rahman Ahad
Sensors 2024, 24(23), 7669; https://doi.org/10.3390/s24237669 - 30 Nov 2024
Cited by 5 | Viewed by 1682
Abstract
Gait recognition is a behavioral biometric technique that identifies individuals based on their unique walking patterns, enabling long-distance identification. Traditional gait recognition methods rely on appearance-based approaches that utilize background-subtracted silhouette sequences to extract gait features. While effective and easy to compute, these methods are susceptible to variations in clothing, carried objects, and illumination changes, compromising the extraction of discriminative features in real-world applications. In contrast, model-based approaches using skeletal key points offer robustness against these covariates. Advances in human pose estimation (HPE) algorithms using convolutional neural networks (CNNs) have facilitated the extraction of skeletal key points, addressing some challenges of model-based approaches. However, the performance of skeleton-based methods still lags behind that of appearance-based approaches. This paper aims to bridge this performance gap by introducing a multi-biometric framework that extracts features from multiple HPE algorithms for gait recognition, employing feature-level fusion (FLF) and decision-level fusion (DLF) by leveraging a single-source multi-sample technique. We utilized state-of-the-art HPE algorithms, OpenPose, AlphaPose, and HRNet, to generate diverse skeleton data samples from a single source video. Subsequently, we employed a residual graph convolutional network (ResGCN) to extract features from the generated skeleton data. In the FLF approach, the features extracted by ResGCN from the skeleton data samples generated by the multiple HPE algorithms are aggregated point-wise for gait recognition, while in the DLF approach, the decisions of ResGCN on each skeleton data sample are integrated by majority voting for the final recognition. Our proposed method demonstrated state-of-the-art skeleton-based cross-view gait recognition performance on the popular CASIA-B dataset.
(This article belongs to the Section Physical Sensors)
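The two fusion strategies described in the abstract reduce to simple operations once per-estimator features and decisions are available. Below is a minimal sketch, assuming 256-dimensional ResGCN embeddings and using the mean as the point-wise aggregator; the exact operator, dimensions, and data are illustrative assumptions, not the paper's code.

```python
import numpy as np

def feature_level_fusion(features):
    """FLF: point-wise (element-wise) aggregation of per-HPE-algorithm embeddings."""
    return np.mean(np.stack(features, axis=0), axis=0)

def decision_level_fusion(predictions):
    """DLF: majority vote over the subject IDs predicted from each skeleton sample."""
    values, counts = np.unique(np.asarray(predictions), return_counts=True)
    return values[np.argmax(counts)]

# Illustrative embeddings, one per pose estimator (OpenPose, AlphaPose, HRNet).
rng = np.random.default_rng(0)
embeddings = [rng.normal(size=256) for _ in range(3)]
fused = feature_level_fusion(embeddings)    # single 256-d gait descriptor
subject = decision_level_fusion([7, 7, 3])  # majority vote -> 7
print(fused.shape, subject)
```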

17 pages, 4232 KB  
Article
Cross-View Gait Recognition Method Based on Multi-Teacher Joint Knowledge Distillation
by Ruoyu Li, Lijun Yun, Mingxuan Zhang, Yanchen Yang and Feiyan Cheng
Sensors 2023, 23(22), 9289; https://doi.org/10.3390/s23229289 - 20 Nov 2023
Cited by 2 | Viewed by 1588
Abstract
To address challenges in cross-view gait recognition such as high network-model complexity, large parameter counts, and slow training and testing, this paper proposes Multi-teacher Joint Knowledge Distillation (MJKD). The algorithm employs multiple complex teacher models, each trained on gait images from a single view, to extract inter-class relationships that are then weighted and integrated into a unified set of inter-class relationships. These relationships guide the training of a lightweight student model, improving its gait feature extraction capability and recognition accuracy. To validate the effectiveness of MJKD, experiments are performed on the CASIA-B dataset using a ResNet network as the benchmark. The experimental results show that the student model trained with MJKD achieves 98.24% recognition accuracy while significantly reducing the number of parameters and the computational cost.
(This article belongs to the Section Intelligent Sensors)
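As a rough illustration of the distillation objective such a method might use, the sketch below blends several frozen teachers' softened outputs into one target distribution for a lightweight student. This is standard weighted soft-target distillation, not MJKD's exact formulation (which distills weighted inter-class relationships); the temperature, weights, and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits, weights, T=4.0):
    """KL divergence between the student and a weighted mixture of the
    teachers' soft targets, softened by temperature T."""
    w = torch.tensor(weights)
    w = w / w.sum()  # normalize the per-teacher mixing coefficients
    soft_targets = sum(wi * F.softmax(t / T, dim=1)
                       for wi, t in zip(w, teacher_logits))
    log_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T

# Illustrative usage: three single-view teachers, 124 classes (CASIA-B subjects).
student = torch.randn(8, 124, requires_grad=True)
teachers = [torch.randn(8, 124) for _ in range(3)]
loss = multi_teacher_kd_loss(student, teachers, weights=[1.0, 1.0, 1.0])
loss.backward()
```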

23 pages, 8161 KB  
Article
Regional Time-Series Coding Network and Multi-View Image Generation Network for Short-Time Gait Recognition
by Wenhao Sun, Guangda Lu, Zhuangzhuang Zhao, Tinghang Guo, Zhuanping Qin and Yu Han
Entropy 2023, 25(6), 837; https://doi.org/10.3390/e25060837 - 23 May 2023
Cited by 4 | Viewed by 2177
Abstract
Gait recognition is an important research direction in biometric authentication. In practical applications, however, the available gait data are often short, while successful recognition typically requires a long, complete gait video; gait images captured from different views also strongly affect recognition performance. To address these problems, we designed a gait data generation network that expands the cross-view image data required for gait recognition, providing sufficient input for the silhouette-based feature extraction branch. In addition, we propose a gait motion feature extraction network based on regional time-series coding. By independently encoding the joint motion time series within different regions of the body, and then combining each region's features through secondary coding, we obtain the distinctive motion relationships between body regions. Finally, bilinear matrix decomposition pooling fuses the spatial silhouette features with the motion time-series features, enabling complete gait recognition from shorter video input. We use the OUMVLP-Pose and CASIA-B datasets to validate the silhouette image branch and the motion time-series branch, respectively, and employ evaluation metrics such as the IS entropy value and Rank-1 accuracy to demonstrate the effectiveness of our network. We also collect gait motion data in the real world and test it with the complete two-branch fusion network. The experimental results show that our network effectively extracts the time-series features of human motion and expands multi-view gait data; the real-world tests further confirm that the proposed method is effective and feasible for gait recognition with short videos as input.
(This article belongs to the Special Issue Deep Learning Models and Applications to Computer Vision)
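The regional time-series coding idea (encode each body region's joint motion independently, then re-encode the combined region features) can be sketched with per-region recurrent encoders. The region grouping, GRU encoders, and layer sizes below are illustrative assumptions, not the paper's architecture; the generation network and the bilinear matrix decomposition pooling are omitted.

```python
import torch
import torch.nn as nn

class RegionalTimeSeriesEncoder(nn.Module):
    """Encode joint motion per body region, then re-encode the combined
    region features ("secondary coding") into one motion descriptor."""
    def __init__(self, regions, in_dim=2, region_dim=64, out_dim=128):
        super().__init__()
        self.regions = regions  # joint-index lists, e.g. torso / arms / legs
        self.region_grus = nn.ModuleList(
            nn.GRU(len(r) * in_dim, region_dim, batch_first=True) for r in regions
        )
        self.fusion_gru = nn.GRU(region_dim * len(regions), out_dim, batch_first=True)

    def forward(self, pose_seq):
        # pose_seq: (B, T, J, in_dim) joint coordinates over T frames
        B, T, _, _ = pose_seq.shape
        region_feats = []
        for idx, gru in zip(self.regions, self.region_grus):
            x = pose_seq[:, :, idx, :].reshape(B, T, -1)  # flatten region joints
            h, _ = gru(x)                                 # (B, T, region_dim)
            region_feats.append(h)
        fused, _ = self.fusion_gru(torch.cat(region_feats, dim=-1))
        return fused[:, -1]                               # (B, out_dim)

# Illustrative 17-joint layout split into three regions.
encoder = RegionalTimeSeriesEncoder(
    regions=[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10], [11, 12, 13, 14, 15, 16]]
)
features = encoder(torch.randn(4, 30, 17, 2))  # 4 clips of 30 frames
print(features.shape)  # torch.Size([4, 128])
```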

15 pages, 5047 KB  
Article
DeepGait: A Learning Deep Convolutional Representation for View-Invariant Gait Recognition Using Joint Bayesian
by Chao Li, Xin Min, Shouqian Sun, Wenqian Lin and Zhichuan Tang
Appl. Sci. 2017, 7(3), 210; https://doi.org/10.3390/app7030210 - 23 Feb 2017
Cited by 90 | Viewed by 8595
Abstract
Human gait, as a soft biometric, helps to recognize people by the way they walk. To further improve recognition performance, we propose DeepGait, a novel video-sensor-based gait representation built from deep convolutional features, and introduce Joint Bayesian to model view variance. DeepGait is generated using a pre-trained “very deep” network, “D-Net” (VGG-D), without any fine-tuning. In the non-cross-view (same-view) setting, DeepGait outperforms hand-crafted representations such as the Gait Energy Image, the Frequency-Domain Feature, and the Gait Flow Image. Furthermore, in the cross-view setting, the 256-dimensional DeepGait obtained after PCA significantly outperforms state-of-the-art methods on the OU-ISIR large population (OULP) dataset. With 4007 subjects, the OULP dataset makes our results statistically reliable.
(This article belongs to the Special Issue Human Activity Recognition)
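The representation pipeline described in the abstract amounts to running gait images through a frozen VGG-16 (configuration “D”) and compressing the descriptors with PCA. The sketch below shows that pipeline under assumed input sizes, using fc7 activations as the descriptor; the Joint Bayesian view-variance model used for matching is omitted, and the placeholder inputs stand in for aligned gait images.

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

# Pre-trained VGG-16 (configuration "D") used as a frozen feature extractor,
# mirroring the abstract's "without any fine-tuning" setting.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
extractor = torch.nn.Sequential(
    vgg.features, vgg.avgpool, torch.nn.Flatten(),
    vgg.classifier[:4],  # stop at fc7: one 4096-d descriptor per image
)

@torch.no_grad()
def deep_gait_features(images, batch_size=16):
    """images: (N, 3, 224, 224) gait frames, ImageNet-normalized."""
    return torch.cat([extractor(b) for b in images.split(batch_size)]).numpy()

# Placeholder inputs; real use would feed aligned gait images per sequence.
descriptors = deep_gait_features(torch.randn(300, 3, 224, 224))
compact = PCA(n_components=256).fit_transform(descriptors)  # 256-d DeepGait
```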

15 pages, 3419 KB  
Article
Cross View Gait Recognition Using Joint-Direct Linear Discriminant Analysis
by Jose Portillo-Portillo, Roberto Leyva, Victor Sanchez, Gabriel Sanchez-Perez, Hector Perez-Meana, Jesus Olivares-Mercado, Karina Toscano-Medina and Mariko Nakano-Miyatake
Sensors 2017, 17(1), 6; https://doi.org/10.3390/s17010006 - 22 Dec 2016
Cited by 17 | Viewed by 6191
Abstract
This paper proposes a view-invariant gait recognition framework built on a single view-invariant model that benefits from the dimensionality reduction provided by Direct Linear Discriminant Analysis (DLDA). The framework, which employs gait energy images (GEIs), creates a single joint model that accurately classifies GEIs captured at different angles. Moreover, the proposed framework helps to mitigate the under-sampling problem (USP), which typically arises when the number of training samples is much smaller than the dimensionality of the feature space. Evaluation experiments compare the proposed framework’s computational complexity and recognition accuracy against those of other view-invariant methods, and the results show improvements in both.
(This article belongs to the Section Physical Sensors)
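A GEI is the pixel-wise average of the aligned binary silhouettes over a gait cycle, and the framework then projects GEIs from all views into one discriminative subspace. The sketch below uses scikit-learn's standard LDA as a stand-in for DLDA (DLDA is specifically designed to remain stable in the under-sampled regime the abstract describes); the silhouette sizes and random data are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gait_energy_image(silhouettes):
    """GEI: pixel-wise average of size-normalized, aligned binary silhouettes
    over one gait cycle."""
    return np.mean(np.asarray(silhouettes, dtype=float), axis=0)

# Illustrative data: 4 subjects x 6 sequences of 30-frame 64x44 silhouettes.
rng = np.random.default_rng(1)
geis, labels = [], []
for subject in range(4):
    for _ in range(6):
        silhouettes = rng.integers(0, 2, size=(30, 64, 44))
        geis.append(gait_energy_image(silhouettes).ravel())
        labels.append(subject)

# Standard LDA as a stand-in for DLDA; DLDA targets exactly the regime where
# the feature dimension (2816 here) far exceeds the number of samples (24).
lda = LinearDiscriminantAnalysis(n_components=3).fit(np.array(geis), labels)
projected = lda.transform(np.array(geis))  # discriminative low-dim gait features
print(projected.shape)  # (24, 3)
```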
