Search Results (36)

Search Parameters:
Keywords = soft biometrics

11 pages, 251 KiB  
Article
Comparative Evaluation of Selected Methods for Assessing Gingival Phenotype
by Anna Dziewulska, Luiza Czerniawska-Kliman, Agnieszka Droździk and Katarzyna Grocholewicz
J. Clin. Med. 2025, 14(8), 2669; https://doi.org/10.3390/jcm14082669 - 14 Apr 2025
Viewed by 212
Abstract
Background/Objectives: The diagnostic assessment of soft and hard tissues surrounding the teeth, including gingival phenotype analysis, is critical for clinicians. Since multiple methods for evaluating gingival phenotype have been reported, determining the optimal approach for dental practitioners is essential. This study aimed to evaluate gingival phenotype using visual assessment (VA) and the periodontal probe transparency method (PTM) in the maxillary central incisors to confirm the superiority of the latter. Methods: This study included 103 individuals aged 22 to 29 years, all with a healthy periodontium, no history of medications, and no prior treatment affecting the gingiva. Two examiners assessed gingival phenotype using VA and the PTM with color-coded probes. Additionally, direct measurement (DM) with biometric ultrasonography was performed. Results: The correlations among VA, the PTM, and DM (Spearman’s rank correlation test) demonstrated robust consistency (r = 0.62–0.76, p < 0.001). There was medium to high agreement between VA and DM (r = 0.62–0.74, p < 0.001), as well as a medium to strong correlation between VA and the PTM (r = 0.63–0.76, p < 0.001), indicating no superiority of the color-coded probe transparency method. Conclusions: Both VA and the PTM with a color-coded probe are reliable for identifying the gingival phenotype in the maxillary anterior region when compared to direct biometric measurement. Full article
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
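The agreement figures in the abstract above (r = 0.62–0.76) come from Spearman's rank correlation. As a minimal illustration of how that statistic is computed — with invented scores standing in for the study's actual measurements — a small pure-Python sketch:

```python
def rank(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's r = Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical data (NOT from the study): categorical visual-assessment
# scores vs. ultrasonic thickness measurements in mm for eight sites.
va = [1, 2, 2, 3, 1, 3, 2, 1]
dm = [0.6, 0.9, 1.0, 1.3, 0.7, 1.2, 0.8, 0.5]
r = spearman(va, dm)
```

Because only ranks matter, the statistic is well suited to comparing an ordinal method (visual assessment) against a continuous one (direct ultrasonic measurement), which is exactly the pairing the study reports.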
25 pages, 7344 KiB  
Article
Front-to-Side Hard and Soft Biometrics for Augmented Zero-Shot Side Face Recognition
by Ahuod Hameed Alsubhi and Emad Sami Jaha
Sensors 2025, 25(6), 1638; https://doi.org/10.3390/s25061638 - 7 Mar 2025
Viewed by 663
Abstract
Face recognition is a fundamental and versatile technology widely used to identify individuals. The human face is a significant nonintrusive biometric modality, attracting numerous research studies. Still, much less focus has been on side-face views, with the majority merely or mainly concentrating on the frontal face. Despite offering fewer traits than the front viewpoint, the side viewpoint of the face is a crucial aspect of an individual’s identity and, in numerous cases, can be the only available information. Our research proposes new soft biometric traits based on the face anthropometric that can be invariantly extracted from the front and side face. We aim to extract and fuse them with vision-based deep features to augment zero-shot side face recognition. Our framework uses the person’s front face information solely for training, then uses their side face information as the only query for biometric matching and identification. For performance evaluation and comparison of the proposed approach, several feature-level fusion experiments were conducted on the CMU Multi-PIE dataset. Our results demonstrate that fusing the proposed face soft traits with the ResNet-50 deep features significantly improves performance. Furthermore, adding global soft biometrics to them improves the accuracy by up to 23%. Full article
(This article belongs to the Special Issue Deep Learning Based Face Recognition and Feature Extraction)

21 pages, 7041 KiB  
Article
Synergy of Internet of Things and Software Engineering Approach for Enhanced Copy–Move Image Forgery Detection Model
by Mohammed Assiri
Electronics 2025, 14(4), 692; https://doi.org/10.3390/electronics14040692 - 11 Feb 2025
Viewed by 567
Abstract
The fast development of digital images and the improvement required for security measures have recently increased the demand for innovative image analysis methods. Image analysis identifies, classifies, and monitors people, events, or objects in images or videos. Image analysis significantly improves security by identifying and preventing attacks on security applications through digital images. It is crucial in diverse security fields, comprising video analysis, anomaly detection, biometrics, object recognition, surveillance, and forensic investigations. By integrating advanced software engineering models with IoT capabilities, this technique revolutionizes copy–move image forgery detection. IoT devices collect and transmit real-world data, improving software solutions to detect and analyze image tampering with exceptional accuracy and efficiency. This combination enhances detection abilities and provides scalable and adaptive solutions to reduce cutting-edge forgery models. Copy–move forgery detection (CMFD) has become possibly a major active research domain in the blind image forensics area. Between existing approaches, most of them are dependent upon block and key-point methods or integration of them. A few deep convolutional neural networks (DCNN) techniques have been implemented in image hashing, image forensics, image retrieval, image classification, etc., that have performed better than the conventional methods. To accomplish robust CMFD, this study develops a fusion of soft computing with a deep learning-based CMFD approach (FSCDL-CMFDA) to secure digital images. The FSCDL-CMFDA approach aims to integrate the benefits of metaheuristics with the DL model for an enhanced CMFD process. In the FSCDL-CMFDA method, histogram equalization is initially performed to improve the image quality. Furthermore, the Siamese convolutional neural network (SCNN) model is used to learn complex features from pre-processed images. Its hyperparameters are chosen by the golden jackal optimization (GJO) model. For the CMFD process, the FSCDL-CMFDA technique employs the regularized extreme learning machine (RELM) classifier. Finally, the detection performance of the RELM method is improved by the beluga whale optimization (BWO) technique. To demonstrate the enhanced performance of the FSCDL-CMFDA method, a comprehensive outcome analysis is conducted using the MNIST and CIFAR datasets. The experimental validation of the FSCDL-CMFDA method portrayed a superior accuracy value of 98.12% over existing models. Full article
(This article belongs to the Special Issue Signal and Image Processing Applications in Artificial Intelligence)

16 pages, 3617 KiB  
Article
KD-Net: Continuous-Keystroke-Dynamics-Based Human Identification from RGB-D Image Sequences
by Xinxin Dai, Ran Zhao, Pengpeng Hu and Adrian Munteanu
Sensors 2023, 23(20), 8370; https://doi.org/10.3390/s23208370 - 10 Oct 2023
Cited by 1 | Viewed by 1713
Abstract
Keystroke dynamics is a soft biometric based on the assumption that humans always type in uniquely characteristic manners. Previous works mainly focused on analyzing the key press or release events. Unlike these methods, we explored a novel visual modality of keystroke dynamics for human identification using a single RGB-D sensor. In order to verify this idea, we created a dataset dubbed KD-MultiModal, which contains 243.2 K frames of RGB images and depth images, obtained by recording a video of hand typing with a single RGB-D sensor. The dataset comprises RGB-D image sequences of 20 subjects (10 males and 10 females) typing sentences, and each subject typed around 20 sentences. In the task, only the hand and keyboard region contributed to the person identification, so we also propose methods of extracting Regions of Interest (RoIs) for each type of data. Unlike the data of the key press or release, our dataset not only captures the velocity of pressing and releasing different keys and the typing style of specific keys or combinations of keys, but also contains rich information on the hand shape and posture. To verify the validity of our proposed data, we adopted deep neural networks to learn distinguishing features from different data representations, including RGB-KD-Net, D-KD-Net, and RGBD-KD-Net. Simultaneously, the sequence of point clouds also can be obtained from depth images given the intrinsic parameters of the RGB-D sensor, so we also studied the performance of human identification based on the point clouds. Extensive experimental results showed that our idea works and the performance of the proposed method based on RGB-D images is the best, which achieved 99.44% accuracy based on the unseen real-world data. To inspire more researchers and facilitate relevant studies, the proposed dataset will be publicly accessible together with the publication of this paper. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

23 pages, 5380 KiB  
Article
SoftVein-WELM: A Weighted Extreme Learning Machine Model for Soft Biometrics on Palm Vein Images
by David Zabala-Blanco, Ruber Hernández-García and Ricardo J. Barrientos
Electronics 2023, 12(17), 3608; https://doi.org/10.3390/electronics12173608 - 26 Aug 2023
Cited by 5 | Viewed by 1610
Abstract
Contactless biometric technologies such as palm vein recognition have gained more relevance in the present and immediate future due to the COVID-19 pandemic. Since certain soft biometrics like gender and age can generate variations in the visualization of palm vein patterns, these soft traits can reduce the penetration rate on large-scale databases for mass individual recognition. Due to the limited availability of public databases, few works report on the existing approaches to gender and age classification through vein pattern images. Moreover, soft biometric classification commonly faces the problem of imbalanced data class distributions, representing a limitation of the reported approaches. This paper introduces weighted extreme learning machine (W-ELM) models for gender and age classification based on palm vein images to address imbalanced data problems, improving the classification performance. The highlights of our proposal are that it avoids using a feature extraction process and can incorporate a weight matrix in optimizing the ELM model by exploiting the imbalanced nature of the data, which guarantees its application in realistic scenarios. In addition, we evaluate a new class distribution for soft biometrics on the VERA dataset and a new multi-label scheme identifying gender and age simultaneously. The experimental results demonstrate that both evaluated W-ELM models outperform previous existing approaches and a novel CNN-based method in terms of the accuracy and G-mean metrics, achieving accuracies of 98.91% and 99.53% for gender classification on VERA and PolyU, respectively. In more challenging scenarios for age and gender–age classifications on the VERA dataset, the proposed method reaches accuracies of 97.05% and 96.91%, respectively. The multi-label classification results suggest that further studies can be conducted on multi-task ELM for palm vein recognition. Full article
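The core of the weighted-ELM idea in the abstract above — a fixed random hidden layer plus a class-weighted, closed-form least-squares solve for the output weights — fits in a few lines. The following is a generic toy reconstruction on synthetic data, not the authors' model or the VERA/PolyU setup; the inverse-frequency weighting and the regularization constant C are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic imbalanced binary problem: 90 majority vs. 10 minority samples,
# mimicking the skewed class distributions the paper addresses.
X = np.vstack([rng.normal(loc=0.0, size=(90, 2)),
               rng.normal(loc=3.0, size=(10, 2))])
y = np.array([0] * 90 + [1] * 10)

# One-hot targets in {-1, +1}, a common encoding for ELM classification.
T = -np.ones((100, 2))
T[np.arange(100), y] = 1.0

# Random hidden layer: weights are drawn once and never trained --
# the defining trait of an extreme learning machine.
L = 40
Wh = rng.normal(size=(2, L))
b = rng.normal(size=L)
H = np.tanh(X @ Wh + b)

# W-ELM: weight each sample inversely to its class frequency, then solve
# the regularized weighted least squares for the output weights:
#   beta = (H^T W H + I/C)^{-1} H^T W T
w = np.where(y == 0, 1.0 / 90, 1.0 / 10)   # assumed weighting scheme
Wd = np.diag(w)
C = 100.0                                   # assumed regularization constant
beta = np.linalg.solve(H.T @ Wd @ H + np.eye(L) / C, H.T @ Wd @ T)

pred = np.argmax(H @ beta, axis=1)
acc = (pred == y).mean()
```

The closed-form solve is what makes ELM training fast, and the diagonal weight matrix is the only change needed to bias the fit toward the minority class.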

29 pages, 14418 KiB  
Article
Classification of Ethnicity Using Efficient CNN Models on MORPH and FERET Datasets Based on Face Biometrics
by Abdulwahid Al Abdulwahid
Appl. Sci. 2023, 13(12), 7288; https://doi.org/10.3390/app13127288 - 19 Jun 2023
Cited by 2 | Viewed by 5872
Abstract
Ethnic conflicts frequently lead to violations of human rights, such as genocide and crimes against humanity, as well as economic collapse, governmental failure, environmental problems, and massive influxes of refugees. Many innocent people suffer as a result of violent ethnic conflict. People’s ethnicity can pose a threat to their safety. There have been many studies on the topic of how to categorize people by race. Until recently, the majority of the work on face biometrics had been conducted on the problem of person recognition from a photograph. However, other softer biometrics such as a person’s age, gender, race, or emotional state are also crucial. The subject of ethnic classification has many potential uses and is developing rapidly. This study summarizes recent advances in ethnicity categorization by utilizing efficient models of convolutional neural networks (CNNs) and focusing on the central portion of the face alone. This article contrasts the results of two distinct CNN models. To put the suggested models through their paces, the study employed holdout testing on the MORPH and FERET datasets. It is essential to remember that this study’s results were generated by focusing on the face’s central region alone, which saved both time and effort. Classification into four classes was achieved with an accuracy of 85% using Model A and 86% using Model B. Consequently, classifying people according to their ethnicity as a fundamental part of the video surveillance systems used at checkpoints is an excellent concept. This categorization statement may also be helpful for picture-search queries. Full article

33 pages, 5606 KiB  
Article
Compact-Fusion Feature Framework for Ethnicity Classification
by Tjokorda Agung Budi Wirayuda, Rinaldi Munir and Achmad Imam Kistijantoro
Informatics 2023, 10(2), 51; https://doi.org/10.3390/informatics10020051 - 12 Jun 2023
Cited by 1 | Viewed by 2012
Abstract
In computer vision, ethnicity classification tasks utilize images containing human faces to extract ethnicity labels. Ethnicity is one of the soft biometric feature categories useful in data analysis for commercial, public, and health sectors. Ethnicity classification begins with face detection as a preprocessing process to determine a human’s presence; then, the feature representation is extracted from the isolated facial image to predict the ethnicity class. This study utilized four handcrafted features (multi-local binary pattern (MLBP), histogram of gradient (HOG), color histogram, and speeded-up-robust-features-based (SURF-based)) as the basis for the generation of a compact-fusion feature. The compact-fusion framework involves optimal feature selection, compact feature extraction, and compact-fusion feature representation. The final feature representation was trained and tested with the SVM One Versus All classifier for ethnicity classification. When it was evaluated in two large datasets, UTKFace and Fair Face, the proposed framework achieved accuracy levels of 89.14%, 82.19%, and 73.87%, respectively, for the UTKFace dataset with four or five classes and the Fair Face dataset with four classes. Furthermore, the compact-fusion feature with a small number of features at 4790, constructed based on conventional handcrafted features, achieved competitive results compared with state-of-the-art methods using a deep-learning-based approach. Full article
(This article belongs to the Section Machine Learning)

13 pages, 1462 KiB  
Article
A Fast Deep Learning ECG Sex Identifier Based on Wavelet RGB Image Classification
by Jose-Luis Cabra Lopez, Carlos Parra and Gonzalo Forero
Data 2023, 8(6), 97; https://doi.org/10.3390/data8060097 - 29 May 2023
Cited by 4 | Viewed by 2860
Abstract
Human sex recognition with electrocardiogram signals is an emerging area in machine learning, mostly oriented toward neural network approaches. It might be the beginning of a field of heart behavior analysis focused on sex. However, a person’s heartbeat changes during daily activities, which could compromise the classification. In this paper, with the intention of capturing heartbeat dynamics, we divided the heart rate into different intervals, creating a specialized identification model for each interval. The sexual differentiation for each model was performed with a deep convolutional neural network from images that represented the RGB wavelet transformation of ECG pseudo-orthogonal X, Y, and Z signals, using sufficient samples to train the network. Our database included 202 people, with a female-to-male population ratio of 49.5–50.5% and an observation period of 24 h per person. As our main goal, we looked for periods of time during which the classification rate of sex recognition was higher and the process was faster; in fact, we identified intervals in which only one heartbeat was required. We found that for each heart rate interval, the best accuracy score varied depending on the number of heartbeats collected. Furthermore, our findings indicated that as the heart rate increased, fewer heartbeats were needed for analysis. On average, our proposed model reached an accuracy of 94.82% ± 1.96%. The findings of this investigation provide a heartbeat acquisition procedure for ECG sex recognition systems. In addition, our results encourage future research to include sex as a soft biometric characteristic in person identification scenarios and for cardiology studies, in which the detection of specific male or female anomalies could help autonomous learning machines move toward specialized health applications. Full article
(This article belongs to the Special Issue Signal Processing for Data Mining)

13 pages, 1991 KiB  
Article
Deep Learning Pet Identification Using Face and Body
by Elham Azizi and Loutfouz Zaman
Information 2023, 14(5), 278; https://doi.org/10.3390/info14050278 - 8 May 2023
Cited by 3 | Viewed by 7642
Abstract
According to the American Humane Association, millions of cats and dogs are lost yearly. Only a few thousand of them are found and returned home. In this work, we use deep learning to help expedite the procedure of finding lost cats and dogs, for which a new dataset is collected. We applied transfer learning methods on different convolutional neural networks for species classification and animal identification. The framework consists of seven sequential layers: data preprocessing, species classification, face and body detection with landmark detection techniques, face alignment, identification, animal soft biometrics, and recommendation. We achieved an accuracy of 98.18% on species classification. In the face identification layer, 80% accuracy was achieved. Body identification resulted in 81% accuracy. When using body identification in addition to face identification, the accuracy increased to 86.5%, with a 100% chance that the animal would be in our top 10 recommendations of matching. By incorporating animals’ soft biometric information, the system can identify animals with 92% confidence. Full article

14 pages, 3182 KiB  
Article
Locality-Sensitive Hashing of Soft Biometrics for Efficient Face Image Database Search and Retrieval
by Ameerah Abdullah Alshahrani and Emad Sami Jaha
Electronics 2023, 12(6), 1360; https://doi.org/10.3390/electronics12061360 - 13 Mar 2023
Cited by 3 | Viewed by 2630
Abstract
As multimedia technology has advanced in recent years, the use of enormous image libraries has dramatically expanded. In applications for image processing, image retrieval has emerged as a crucial technique. Content-based face image retrieval is a well-established technology in many real-world applications, such as social media, where dependable retrieval capabilities are required to enable quick search among large numbers of images. Humans frequently use faces to recognize and identify individuals. Face recognition from official or personal photos is becoming increasingly popular as it can aid crime detectives in identifying victims and criminals. Furthermore, a large number of images requires a large amount of storage, and the process of image comparison and matching, consequently, takes longer. Hence, the query speed and low storage consumption of hash-based image retrieval techniques have garnered a considerable amount of interest. The main contribution of this work is to try to overcome the challenge of performance improvement in image retrieval by using locality-sensitive hashing (LSH) for retrieving top-matched face images from large-scale databases. We use face soft biometrics as a search input and propose an effective LSH-based method to replace standard face soft biometrics with their corresponding hash codes for searching a large-scale face database and retrieving the top-k of the matching face images with higher accuracy in less time. The experimental results, using the Labeled Faces in the Wild (LFW) database together with the corresponding database of attributes (LFW-attributes), show that our proposed method using LSH face soft biometrics (Soft BioHash) improves the performance of face image database search and retrieval and also outperforms the LSH hard face biometrics method (Hard BioHash). Full article
(This article belongs to the Special Issue Intelligent Face Recognition and Multiple Applications)
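The retrieval pipeline described above — hash soft-biometric attribute vectors, look up the query's bucket, then rank candidates exactly — can be sketched with random-hyperplane LSH. Everything here is a hypothetical stand-in: synthetic attribute vectors rather than the LFW-attributes data, and a generic sign-bit hash rather than the paper's exact Soft BioHash construction:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic gallery: 500 faces, 8 soft-biometric attribute scores each
# (stand-in for LFW-attributes-style values).
gallery = rng.normal(size=(500, 8))

# Random-hyperplane LSH: each of k hyperplanes contributes one sign bit,
# so vectors at a small angle tend to land in the same bucket.
k = 12
planes = rng.normal(size=(8, k))

def lsh_key(v):
    return tuple((v @ planes > 0).astype(int))

buckets = {}
for i, v in enumerate(gallery):
    buckets.setdefault(lsh_key(v), []).append(i)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Query with a slightly perturbed copy of gallery item 7; only the matching
# bucket's candidates are ranked by exact cosine similarity, which is the
# source of the speedup over scanning the full database.
query = gallery[7] + rng.normal(scale=0.01, size=8)
cand = buckets.get(lsh_key(query), [])
best = max(cand, key=lambda i: cos(query, gallery[i])) if cand else None
```

With k sign bits the gallery splits into up to 2^k buckets, trading a small miss probability (a near-duplicate can fall on the other side of one hyperplane) for a large reduction in candidates to score; in practice several independent hash tables are used to drive that miss probability down.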

17 pages, 3859 KiB  
Article
Periocular Data Fusion for Age and Gender Classification
by Carmen Bisogni, Lucia Cascone and Fabio Narducci
J. Imaging 2022, 8(11), 307; https://doi.org/10.3390/jimaging8110307 - 9 Nov 2022
Cited by 2 | Viewed by 1873
Abstract
In recent years, the study of soft biometrics has gained increasing interest in the security and business sectors. These characteristics provide limited biometric information about the individual; hence, it is possible to increase performance by combining numerous data sources to overcome the accuracy limitations of a single trait. In this research, we provide a study on the fusion of periocular features taken from pupils, fixations, and blinks to achieve a demographic classification, i.e., by age and gender. A data fusion approach is implemented for this purpose. To build a trust evaluation of the selected biometric traits, we first employ a concatenation scheme for fusion at the feature level and, at the score level, transformation and classifier-based score fusion approaches (e.g., weighted sum, weighted product, Bayesian rule, etc.). Data fusion enables improved performance and the synthesis of acquired information, as well as its secure storage and protection of the multi-biometric system’s original biometric models. The combination of these soft biometrics characteristics combines flawlessly the need to protect individual privacy and to have a strong discriminatory element. The results are quite encouraging, with an age classification accuracy of 84.45% and a gender classification accuracy of 84.62%, respectively. The results obtained encourage the studies on periocular area to detect soft biometrics to be applied when the lower part of the face is not visible. Full article
(This article belongs to the Special Issue Multi-Biometric and Multi-Modal Authentication)

17 pages, 18540 KiB  
Article
Robust Identification and Segmentation of the Outer Skin Layers in Volumetric Fingerprint Data
by Alexander Kirfel, Tobias Scheer, Norbert Jung and Christoph Busch
Sensors 2022, 22(21), 8229; https://doi.org/10.3390/s22218229 - 27 Oct 2022
Cited by 5 | Viewed by 2651
Abstract
Despite the long history of fingerprint biometrics and its use to authenticate individuals, there are still some unsolved challenges with fingerprint acquisition and presentation attack detection (PAD). Currently available commercial fingerprint capture devices struggle with non-ideal skin conditions, including soft skin in infants. They are also susceptible to presentation attacks, which limits their applicability in unsupervised scenarios such as border control. Optical coherence tomography (OCT) could be a promising solution to these problems. In this work, we propose a digital signal processing chain for segmenting two complementary fingerprints from the same OCT fingertip scan: One fingerprint is captured as usual from the epidermis (“outer fingerprint”), whereas the other is taken from inside the skin, at the junction between the epidermis and the underlying dermis (“inner fingerprint”). The resulting 3D fingerprints are then converted to a conventional 2D grayscale representation from which minutiae points can be extracted using existing methods. Our approach is device-independent and has been proven to work with two different time domain OCT scanners. Using efficient GPGPU computing, it took less than a second to process an entire gigabyte of OCT data. To validate the results, we captured OCT fingerprints of 130 individual fingers and compared them with conventional 2D fingerprints of the same fingers. We found that both the outer and inner OCT fingerprints were backward compatible with conventional 2D fingerprints, with the inner fingerprint generally being less damaged and, therefore, more reliable. Full article
(This article belongs to the Special Issue Biometric Technologies Based on Optical Coherence Tomography (OCT))

22 pages, 601 KiB  
Review
Facial Age Estimation Using Machine Learning Techniques: An Overview
by Khaled ELKarazle, Valliappan Raman and Patrick Then
Big Data Cogn. Comput. 2022, 6(4), 128; https://doi.org/10.3390/bdcc6040128 - 26 Oct 2022
Cited by 23 | Viewed by 14199
Abstract
Automatic age estimation from facial images is an exciting machine learning topic that has attracted researchers’ attention over the past several years. Numerous human–computer interaction applications, such as targeted marketing, content access control, or soft-biometrics systems, employ age estimation models to carry out secondary tasks such as user filtering or identification. Despite the vast array of applications that could benefit from automatic age estimation, building an automatic age estimation system comes with issues such as data disparity, the unique ageing pattern of each individual, and facial photo quality. This paper provides a survey on the standard methods of building automatic age estimation models, the benchmark datasets for building these models, and some of the latest proposed pieces of literature that introduce new age estimation methods. Finally, we present and discuss the standard evaluation metrics used to assess age estimation models. In addition to the survey, we discuss the identified gaps in the reviewed literature and present recommendations for future research. Full article

15 pages, 9948 KiB  
Article
Utilizing Spatio Temporal Gait Pattern and Quadratic SVM for Gait Recognition
by Hajra Masood and Humera Farooq
Electronics 2022, 11(15), 2386; https://doi.org/10.3390/electronics11152386 - 30 Jul 2022
Cited by 6 | Viewed by 2296
Abstract
This study aimed to develop a vision-based gait recognition system for person identification. Gait is the soft biometric trait recognizable from low-resolution surveillance videos, where the face and other hard biometrics are not even extractable. The gait is a cycle pattern of human body locomotion that consists of two sequential phases: swing and stance. The gait features of the complete gait cycle, referred to as gait signature, can be used for person identification. The proposed work utilizes gait dynamics for gait feature extraction. For this purpose, the spatio temporal power spectral gait features are utilized for gait dynamics captured through sub-pixel motion estimation, and they are less affected by the subject’s appearance. The spatio temporal power spectral gait features are utilized for a quadratic support vector machine classifier for gait recognition aiming for person identification. Spatio temporal power spectral preserves the spatiotemporal gait features and is adaptable for a quadratic support vector machine classifier-based gait recognition across different views and appearances. We have evaluated the gait features and support vector machine classifier-based gait recognition on a locally collected gait dataset that captures the effect of view variance in high scene depth videos. The proposed gait recognition technique achieves significant accuracy across all appearances and views. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Big Data Analysis)

15 pages, 2982 KiB  
Article
An Intelligent Gender Classification System in the Era of Pandemic Chaos with Veiled Faces
by Jawad Rasheed, Sadaf Waziry, Shtwai Alsubai and Adnan M. Abu-Mahfouz
Processes 2022, 10(7), 1427; https://doi.org/10.3390/pr10071427 - 21 Jul 2022
Cited by 15 | Viewed by 3503
Abstract
In the world of chaos, the pandemic has driven individuals around the globe to wear face masks for preventing the virus’s transmission, however, this has made it difficult to determine the gender of the person wearing a mask. Gender information is part of soft biometrics, which provides extra information about a person’s identification, thus, identifying a gender based on a veiled face is among the urgent challenges that must be advocated for in the next decade. Therefore, this study exploited various pre-trained deep learning networks (DenseNet121, DenseNet169, ResNet50, ResNet101, Xception, InceptionV3, MobileNetV2, EfficientNetB0, and VGG16) to analyze the effect of the mask while identifying the gender using facial images of human beings. The study comprises two strategies. First, the experimental part involves the training of models using facial images with and without masks, while the second strategy considers images with masks only, to train the pre-trained models. Experimental results reveal that DenseNet121 and Xception networks performed well for both strategies. Besides this, the Inception network outperformed all others by attaining 98.75% accuracy for the first strategy, whereas EfficientNetB0 performed well for the second strategy by securing 97.27%. Moreover, results suggest that facemasks evidently impact the performance of state-of-the-art pre-trained networks for gender classification. Full article
(This article belongs to the Special Issue Recent Advances in Machine Learning and Applications)
