Advances in Pattern Analysis for Identity Recognition and Verification

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (1 November 2020) | Viewed by 14415

Special Issue Editor

Prof. Dr. George A. Papakostas
Guest Editor
Special Issue Information

Dear Colleagues,

We are currently witnessing the fourth industrial revolution, known as Industry 4.0 in Europe, Society 5.0 in Japan, and the Industrial Internet of Things (IIoT) in the USA. The main characteristics of this new living model are the massive interaction of humans with machines (smartphones, computers, social robots, etc.) and the generation, storage, and processing of big data using AI algorithms and the Internet of Things (IoT).

In this context, there is a continuously increasing need to identify humans, and entities in general, by analyzing their patterns. For example, in the coming years humans will interact with ATMs not with debit cards but with their biometrics.

Although significant progress has recently been reported in the field of biometrics by incorporating deep learning models, the task of identifying someone under varying conditions remains open. For example, human identification in the wild, or from only a few data samples, are cases that deserve more attention and research toward better analysis of the patterns describing identities.

This Special Issue aims to summarize recent advances in extracting and analyzing patterns that can identify their owner. Topics may include, but are not limited to:

  • Face recognition/verification
  • Iris recognition/verification
  • Fingerprint recognition/verification
  • Ear recognition/verification
  • Vein recognition/verification
  • Gait recognition/verification
  • EEG recognition/verification
  • Behavioral biometrics
  • Multimodal and adaptive biometric systems
  • Adversarial machine learning for biometric systems
  • Feature extraction
  • Storing and processing big biometric data
  • New biometric systems architectures
  • New benchmark datasets
  • New applications

Prof. Dr. George A. Papakostas
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biometrics
  • identity recognition/verification
  • feature extraction

Published Papers (4 papers)


Research

15 pages, 2018 KiB  
Article
Multimodal Few-Shot Learning for Gait Recognition
by Jucheol Moon, Nhat Anh Le, Nelson Hebert Minaya and Sang-Il Choi
Appl. Sci. 2020, 10(21), 7619; https://doi.org/10.3390/app10217619 - 29 Oct 2020
Cited by 9 | Viewed by 3203
Abstract
A person’s gait is a behavioral trait that is uniquely associated with each individual and can be used to recognize the person. As information about the human gait can be captured by wearable devices, a few studies have proposed methods to process gait information for identification purposes. Despite recent advances in gait recognition, the open set gait recognition problem presents challenges to current approaches: a system should be able to deal with unseen subjects who are not included in the training dataset. In this paper, we propose a system that learns a mapping from a multimodal time series, collected using an insole, to a latent (embedding vector) space to address the open set gait recognition problem. The distance between two embedding vectors in the latent space corresponds to the similarity between the two multimodal time series. Using the characteristics of the human gait pattern, multimodal time series are sliced into unit steps. The system maps unit steps to embedding vectors using an ensemble consisting of a convolutional neural network and a recurrent neural network. To recognize each individual, the system learns a decision function using a one-class support vector machine from a few embedding vectors of the person in the latent space; it then determines whether an unknown unit step belongs to a known individual. Our experiments demonstrate that the proposed framework recognizes individuals with high accuracy regardless of whether they have been registered. In an environment in which all people wore the insole, the framework could be widely used for user verification.
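The enrollment-and-decision stage described above can be sketched with a one-class SVM over latent embeddings. This is a minimal illustration using synthetic Gaussian vectors in place of the embeddings produced by the paper's CNN/RNN ensemble; the cluster locations, dimensionality, and SVM parameters are all assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-ins for embedding vectors from the CNN/RNN ensemble: the enrolled
# subject's unit-step embeddings form one cluster in the latent space,
# while an unseen subject's embeddings lie elsewhere.
enrolled = rng.normal(loc=1.0, scale=0.1, size=(20, 8))
unseen = rng.normal(loc=-1.0, scale=0.1, size=(5, 8))

# Learn a decision function from a few embeddings of the enrolled person.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(enrolled)

# A new unit step is accepted (+1) if it falls inside the learned region,
# otherwise rejected as coming from an unknown subject (-1).
probe_known = enrolled.mean(axis=0, keepdims=True)
probe_unknown = unseen[:1]
print(clf.predict(probe_known)[0], clf.predict(probe_unknown)[0])
```

Because the decision function is learned per person from a handful of samples, new subjects can be enrolled without retraining the embedding network, which is what makes the approach open set.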

17 pages, 397 KiB  
Article
Robust Deep Speaker Recognition: Learning Latent Representation with Joint Angular Margin Loss
by Labib Chowdhury, Hasib Zunair and Nabeel Mohammed
Appl. Sci. 2020, 10(21), 7522; https://doi.org/10.3390/app10217522 - 26 Oct 2020
Cited by 9 | Viewed by 4344
Abstract
Speaker identification is gaining popularity, with notable applications in security, automation, and authentication. For speaker identification, deep-convolutional-network-based approaches, such as SincNet, are used as an alternative to i-vectors. Convolution performed by parameterized sinc functions in SincNet demonstrated superior results in this area. This system optimizes a softmax loss, which is integrated in the classification layer responsible for making predictions. Since the nature of this loss is only to increase interclass distance, it is not always an optimal design choice for biometric-authentication tasks such as face and speaker recognition. To overcome these issues, this study proposes a family of models that improve upon the state-of-the-art SincNet model. The proposed models, AF-SincNet, Ensemble-SincNet, and ALL-SincNet, serve as potential successors to the successful SincNet model. The proposed models are compared on a number of speaker-recognition datasets, such as TIMIT and LibriSpeech, each with its own unique challenges. Performance improvements are demonstrated compared to competitive baselines. In interdataset evaluation, the best reported model not only consistently outperformed the baselines and prior models, but also generalized well to unseen and diverse tasks such as Bengali speaker recognition.
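The limitation of plain softmax noted in the abstract is commonly addressed with angular margin losses, which enforce intra-class compactness in addition to interclass separation. The sketch below is a generic additive-angular-margin (ArcFace-style) softmax in NumPy, not the paper's exact formulation; the margin and scale values are assumptions.

```python
import numpy as np

def angular_margin_loss(cos_sim, labels, margin=0.2, scale=30.0):
    """Cross-entropy over scaled cosine logits, with an additive angular
    margin on the target class: cos(theta_y + m) replaces cos(theta_y)."""
    theta = np.arccos(np.clip(cos_sim, -1.0, 1.0))
    logits = cos_sim.copy()
    rows = np.arange(len(labels))
    logits[rows, labels] = np.cos(theta[rows, labels] + margin)
    logits *= scale
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

# Toy cosine similarities between L2-normalized embeddings and class
# weights. A positive margin penalizes the same predictions more strongly,
# forcing a larger angular gap between speakers during training.
cos_sim = np.array([[0.8, 0.1], [0.2, 0.9]])
labels = np.array([0, 1])
plain = angular_margin_loss(cos_sim, labels, margin=0.0)
margined = angular_margin_loss(cos_sim, labels, margin=0.2)
print(margined > plain)  # the margin makes the objective strictly harder
```

At test time the margin is dropped and plain cosine similarity is used, so the learned embeddings transfer directly to open-set verification.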

21 pages, 5392 KiB  
Article
Induction Motor Fault Classification Based on FCBF-PSO Feature Selection Method
by Chun-Yao Lee and Wen-Cheng Lin
Appl. Sci. 2020, 10(15), 5383; https://doi.org/10.3390/app10155383 - 04 Aug 2020
Cited by 4 | Viewed by 2396
Abstract
This study proposes a fast correlation-based filter with particle-swarm optimization (FCBF–PSO). In FCBF–PSO, the weights of the features selected by the fast correlation-based filter are optimized and combined with a backpropagation neural network as a classifier to identify the faults of induction motors. Three significant parts support the FCBF–PSO. First, Hilbert–Huang transforms were used to analyze the current signals of a normal motor, bearing damage, broken rotor bars, and short circuits in stator windings. Second, three feature-selection methods, ReliefF, symmetrical uncertainty (SU), and FCBF, were applied to select the important features after feature extraction, and their accuracies were compared. Third, particle-swarm optimization (PSO) was used to optimize the selected feature weights to obtain the best solution. The results showed the excellent performance of FCBF–PSO for induction motor fault classification, with fewer selected features and better identification ability. In addition, the induction motor fault analysis in this study was performed under different operating environments, namely SNR = 40 dB, SNR = 30 dB, and SNR = 20 dB. The FCBF–PSO proposed in this research also achieved higher accuracy than the typical feature-selection methods ReliefF, SU, and FCBF.
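FCBF ranks features by their symmetrical uncertainty with the class label and prunes features that are more redundant with an already-selected feature than relevant to the class. A minimal sketch of the symmetrical-uncertainty measure for discrete variables follows; the toy arrays are illustrative assumptions, not the paper's motor-current features.

```python
import numpy as np

def entropy(x):
    """Shannon entropy (bits) of a sequence of non-negative integers."""
    p = np.bincount(x) / len(x)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0.0:
        return 0.0
    hxy = entropy(x * (y.max() + 1) + y)  # joint entropy via pair encoding
    mi = hx + hy - hxy                    # mutual information I(X; Y)
    return 2.0 * mi / (hx + hy)

# A feature identical to the class label is maximally relevant (SU = 1);
# a feature independent of the label is irrelevant (SU = 0).
label = np.array([0, 0, 1, 1])
redundant = np.array([0, 0, 1, 1])
independent = np.array([0, 1, 0, 1])
print(symmetrical_uncertainty(redundant, label),
      symmetrical_uncertainty(independent, label))
```

The normalization by H(X) + H(Y) is what lets FCBF compare features with different numbers of discrete levels on an equal footing.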

22 pages, 1632 KiB  
Article
Human Skeleton Data Augmentation for Person Identification over Deep Neural Network
by Beom Kwon and Sanghoon Lee
Appl. Sci. 2020, 10(14), 4849; https://doi.org/10.3390/app10144849 - 15 Jul 2020
Cited by 8 | Viewed by 3923
Abstract
With the advancement of pose estimation techniques, skeleton-based person identification has recently received considerable attention in many applications. In this study, a skeleton-based person identification method using a deep neural network (DNN) is investigated. In this method, anthropometric features extracted from the human skeleton sequence are used as the input to the DNN. However, training the DNN with an insufficient training dataset makes the network unstable and may lead to overfitting during the training phase, causing significant performance degradation in the testing phase. To cope with the shortage of data, we investigate a novel data augmentation method for skeleton-based person identification that utilizes the bilateral symmetry of the human body. To achieve this, augmented vectors are generated by sharing the anthropometric features extracted from one side of the human body with the other side, and vice versa. Thereby, the total number of anthropometric feature vectors is increased by 256 times, which enables the DNN to be trained while avoiding overfitting. The simulation results demonstrate that the average accuracy of person identification improves remarkably, reaching up to 100% with the augmentation on public datasets.
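The bilateral-symmetry augmentation can be sketched as enumerating every combination of left/right swaps over paired anthropometric features. The feature layout below (8 left/right pairs, giving 2^8 = 256 vectors including the original, matching the factor reported in the abstract) is a hypothetical example, not the paper's actual feature definition.

```python
import itertools
import numpy as np

def mirror_augment(features, pairs):
    """Generate augmented anthropometric vectors by swapping left/right
    paired features in every combination: k pairs -> 2**k vectors
    (the all-False mask reproduces the original vector)."""
    out = []
    for mask in itertools.product([False, True], repeat=len(pairs)):
        v = features.copy()
        for swap, (left, right) in zip(mask, pairs):
            if swap:
                v[left], v[right] = features[right], features[left]
        out.append(v)
    return np.stack(out)

# Hypothetical layout: indices 0-7 are left-side features, 8-15 the
# corresponding right-side features.
pairs = [(i, i + 8) for i in range(8)]
features = np.arange(16, dtype=float)
aug = mirror_augment(features, pairs)
print(aug.shape)  # (256, 16)
```

Because each swap exchanges measurements that are statistically interchangeable under bilateral symmetry, every generated vector is a plausible sample of the same person, which is what makes the 256x expansion safe for training.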
