Search Results (39)

Search Parameters:
Keywords = soft biometrics

31 pages, 1452 KB  
Article
A User-Centric Context-Aware Framework for Real-Time Optimisation of Multimedia Data Privacy Protection, and Information Retention Within Multimodal AI Systems
by Ndricim Topalli and Atta Badii
Sensors 2025, 25(19), 6105; https://doi.org/10.3390/s25196105 - 3 Oct 2025
Viewed by 486
Abstract
The increasing use of AI systems for face, object, action, scene, and emotion recognition raises significant privacy risks, particularly when processing Personally Identifiable Information (PII). Current privacy-preserving methods lack adaptability to users’ preferences and contextual requirements, and obfuscate user faces uniformly. This research proposes a user-centric, context-aware, and ontology-driven privacy protection framework that dynamically adjusts privacy decisions based on user-defined preferences, entity sensitivity, and contextual information. The framework integrates state-of-the-art recognition models for recognising faces, objects, scenes, actions, and emotions in real time on data acquired from vision sensors (e.g., cameras). Privacy decisions are directed by a contextual ontology based on Contextual Integrity theory, which classifies entities into private, semi-private, or public categories. Adaptive privacy levels are enforced through obfuscation techniques and a multi-level privacy model that supports user-defined red lines (e.g., “always hide logos”). The framework also proposes a Re-Identifiability Index (RII) using soft biometric features such as gait, hairstyle, clothing, skin tone, age, and gender, to mitigate identity leakage and to support fallback protection when face recognition fails. The experimental evaluation relied on sensor-captured datasets, which replicate real-world image sensors such as surveillance cameras. User studies confirmed that the framework was effective: 85.2% of participants rated the obfuscation operations as highly effective, and the remaining 14.8% rated them adequately effective. Amongst these, 71.4% considered the balance between privacy protection and usability very satisfactory, and 28% found it satisfactory. GPU acceleration was deployed to enable real-time performance of these models, reducing frame processing time from 1200 ms (CPU) to 198 ms. This ontology-driven framework employs user-defined red lines, contextual reasoning, and dual metrics (RII/IVI) to dynamically balance privacy protection with scene intelligibility. Unlike current anonymisation methods, the framework provides a real-time, user-centric, and GDPR-compliant method that operationalises privacy-by-design while preserving scene intelligibility. These features make the framework appropriate for a variety of real-world applications, including healthcare, surveillance, and social media.
(This article belongs to the Section Intelligent Sensors)
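The abstract does not give a formula for the RII, so the following is only one plausible reading: a weighted sum of per-trait match confidences. Trait weights and the fallback threshold are illustrative assumptions, not the paper's values.

```python
# Illustrative Re-Identifiability Index (RII) over soft biometric traits.
# Trait names come from the abstract; the weights are hypothetical.
TRAIT_WEIGHTS = {
    "gait": 0.25,
    "hairstyle": 0.15,
    "clothing": 0.15,
    "skin_tone": 0.15,
    "age": 0.15,
    "gender": 0.15,
}

def re_identifiability_index(trait_confidences: dict[str, float]) -> float:
    """Weighted sum of per-trait match confidences in [0, 1].

    trait_confidences maps a trait name to the confidence that the trait
    still identifies the person after obfuscation (0 = fully hidden).
    """
    return sum(TRAIT_WEIGHTS[t] * c
               for t, c in trait_confidences.items() if t in TRAIT_WEIGHTS)

# Example: face hidden, but gait, clothing, and apparent age remain visible.
rii = re_identifiability_index({"gait": 0.9, "clothing": 0.8, "age": 0.4})
if rii > 0.3:  # hypothetical threshold for triggering fallback protection
    print(f"RII={rii:.2f}: apply additional obfuscation (e.g., body blur)")
```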

25 pages, 4706 KB  
Article
Transfer Learning-Based Distance-Adaptive Global Soft Biometrics Prediction in Surveillance
by Sonjoy Ranjon Das, Henry Onilude, Bilal Hassan, Preeti Patel and Karim Ouazzane
Electronics 2025, 14(18), 3719; https://doi.org/10.3390/electronics14183719 - 19 Sep 2025
Viewed by 363
Abstract
Soft biometric prediction—including age, gender, and ethnicity—is critical in surveillance applications, yet often suffers from performance degradation as the subject-to-camera distance increases. This study hypothesizes that embedding distance-awareness into the training process can mitigate such degradation and enhance model generalization across varying visual conditions. We propose a distance-adaptive, multi-task deep learning framework built upon EfficientNetB3, augmented with task-specific heads and trained progressively across four distance intervals (4 m to 10 m). A weighted composite loss function is employed to balance classification and regression objectives. The model is evaluated on a hybrid dataset combining the Front-View Gait (FVG) and MMV annotated pedestrian datasets, totaling over 19,000 samples. Experimental results demonstrate that the framework achieves up to 95% gender classification accuracy at 4 m and retains 85% accuracy at 10 m. Ethnicity prediction maintains an accuracy above 65%, while age estimation achieves a mean absolute error (MAE) ranging from 1.1 to 1.5 years. These findings validate the model’s robustness across distances and its superiority over conventional static learning approaches. Despite challenges such as computational overhead and annotation demands, the proposed approach offers a scalable and real-time-capable solution for distance-resilient biometric systems.
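As a sketch of the weighted composite loss described above (a PyTorch-style example; the task weights and head shapes are assumptions, since the abstract does not give them):

```python
import torch
import torch.nn as nn

# Hypothetical weights balancing the two classification heads (gender,
# ethnicity) and the age regression head; the paper's tuned values are unknown.
W_GENDER, W_ETHNICITY, W_AGE = 1.0, 1.0, 0.5

ce = nn.CrossEntropyLoss()
mae = nn.L1Loss()  # age is reported as MAE, so L1 is a natural stand-in

def composite_loss(g_logits, g_y, e_logits, e_y, age_pred, age_y):
    return (W_GENDER * ce(g_logits, g_y)
            + W_ETHNICITY * ce(e_logits, e_y)
            + W_AGE * mae(age_pred.squeeze(-1), age_y))

# Toy batch: 8 samples, 2 gender classes, 5 ethnicity classes.
loss = composite_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)),
                      torch.randn(8, 5), torch.randint(0, 5, (8,)),
                      torch.randn(8, 1), torch.rand(8) * 60)
print(loss.item())
```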

17 pages, 1798 KB  
Article
From One Domain to Another: The Pitfalls of Gender Recognition in Unseen Environments
by Nzakiese Mbongo, Kailash A. Hambarde and Hugo Proença
Sensors 2025, 25(13), 4161; https://doi.org/10.3390/s25134161 - 4 Jul 2025
Viewed by 526
Abstract
Gender recognition from pedestrian imagery is acknowledged by many as a quasi-solved problem, yet most existing approaches evaluate performance in a within-domain setting, i.e., when the test and training data, though disjoint, closely resemble each other. This work provides the first exhaustive cross-domain assessment of six architectures considered to represent the state of the art: ALM, VAC, Rethinking, LML, YinYang-Net, and MAMBA, across three widely known benchmarks: PA-100K, PETA, and RAP. All train/test combinations between datasets were evaluated, yielding 54 comparable experiments. The results revealed a performance split: median in-domain F1 approached 90% in most models, while the average drop under domain shift was up to 16.4 percentage points, with the most recent approaches degrading the most. The adaptive-masking ALM achieved an F1 above 80% in most transfer scenarios, particularly those involving high-resolution or pose-stable domains, highlighting the importance of strong inductive biases over architectural novelty alone. Further, to characterize robustness quantitatively, we introduced the Unified Robustness Metric (URM), which integrates the average cross-domain degradation performance into a single score. A qualitative saliency analysis also corroborated the numerical findings by exposing over-confidence and contextual bias in misclassifications. Overall, this study suggests that challenges in gender recognition are much more evident in cross-domain settings than under the commonly reported within-domain context. Finally, we formalize an open evaluation protocol that can serve as a baseline for future works of this kind.
(This article belongs to the Section Intelligent Sensors)
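The abstract does not define the URM formula; the sketch below shows one plausible composite that rewards high cross-domain F1 and penalises large in-domain-to-cross-domain drops. All values are invented for illustration.

```python
import numpy as np

def unified_robustness_metric(f1: np.ndarray) -> float:
    """f1[i, j] = F1 score when training on domain i and testing on domain j."""
    in_domain = np.diag(f1).mean()
    cross_domain = f1[~np.eye(len(f1), dtype=bool)].mean()
    degradation = max(in_domain - cross_domain, 0.0)
    return cross_domain * (1.0 - degradation)  # high transfer F1, small drop

# Three domains (e.g., PA-100K, PETA, RAP); values are invented.
f1 = np.array([[0.90, 0.74, 0.70],
               [0.76, 0.89, 0.72],
               [0.71, 0.73, 0.91]])
print(f"URM = {unified_robustness_metric(f1):.3f}")
```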

11 pages, 251 KB  
Article
Comparative Evaluation of Selected Methods for Assessing Gingival Phenotype
by Anna Dziewulska, Luiza Czerniawska-Kliman, Agnieszka Droździk and Katarzyna Grocholewicz
J. Clin. Med. 2025, 14(8), 2669; https://doi.org/10.3390/jcm14082669 - 14 Apr 2025
Viewed by 1542
Abstract
Background/Objectives: The diagnostic assessment of soft and hard tissues surrounding the teeth, including gingival phenotype analysis, is critical for clinicians. Since multiple methods for evaluating gingival phenotype have been reported, determining the optimal approach for dental practitioners is essential. This study aimed to evaluate gingival phenotype using visual assessment (VA) and the periodontal probe transparency method (PTM) in the maxillary central incisors, to test the presumed superiority of the latter. Methods: This study included 103 individuals aged 22 to 29 years, all with a healthy periodontium, no history of medications, and no prior treatment affecting the gingiva. Two examiners assessed gingival phenotype using VA and the PTM with color-coded probes. Additionally, direct measurement (DM) with biometric ultrasonography was performed. Results: The correlations among VA, the PTM, and DM (Spearman’s rank correlation test) demonstrated robust consistency (r = 0.62–0.76, p < 0.001). There was medium to high agreement between VA and DM (r = 0.62–0.74, p < 0.001), as well as a medium to strong correlation between VA and the PTM (r = 0.63–0.76, p < 0.001), indicating no superiority of the color-coded probe transparency method. Conclusions: Both VA and the PTM with a color-coded probe are reliable for identifying the gingival phenotype in the maxillary anterior region when compared to direct biometric measurement.
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
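For readers unfamiliar with the statistic used, a minimal example of computing a Spearman rank correlation between two such ordinal ratings (the scores are invented, not the study's data):

```python
from scipy.stats import spearmanr

# Illustrative ordinal phenotype ratings from two methods (e.g., VA vs. PTM).
va_scores = [1, 2, 2, 3, 1, 3, 2, 1, 3, 2]   # visual assessment
ptm_scores = [1, 2, 3, 3, 1, 3, 2, 2, 3, 2]  # probe transparency method

r, p = spearmanr(va_scores, ptm_scores)
print(f"Spearman r = {r:.2f}, p = {p:.4f}")
```
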
25 pages, 7344 KB  
Article
Front-to-Side Hard and Soft Biometrics for Augmented Zero-Shot Side Face Recognition
by Ahuod Hameed Alsubhi and Emad Sami Jaha
Sensors 2025, 25(6), 1638; https://doi.org/10.3390/s25061638 - 7 Mar 2025
Cited by 2 | Viewed by 1647
Abstract
Face recognition is a fundamental and versatile technology widely used to identify individuals. The human face is a significant nonintrusive biometric modality, attracting numerous research studies. Still, much less focus has been placed on side-face views, with most studies concentrating mainly on the frontal face. Despite offering fewer traits than the front viewpoint, the side viewpoint of the face is a crucial aspect of an individual’s identity and, in numerous cases, can be the only available information. Our research proposes new soft biometric traits, based on face anthropometrics, that can be invariantly extracted from the front and side face. We aim to extract and fuse them with vision-based deep features to augment zero-shot side face recognition. Our framework uses the person’s front face information solely for training, then uses their side face information as the only query for biometric matching and identification. For performance evaluation and comparison of the proposed approach, several feature-level fusion experiments were conducted on the CMU Multi-PIE dataset. Our results demonstrate that fusing the proposed face soft traits with the ResNet-50 deep features significantly improves performance. Furthermore, adding global soft biometrics to them improves the accuracy by up to 23%.
(This article belongs to the Special Issue Deep Learning Based Face Recognition and Feature Extraction)
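A minimal sketch of the feature-level fusion step described above, assuming L2-normalised deep embeddings and min-max-scaled soft traits; the dimensions and normalisation choices are assumptions:

```python
import numpy as np

def fuse(deep_feat: np.ndarray, soft_traits: np.ndarray) -> np.ndarray:
    """Concatenate L2-normalised deep features (e.g., a 2048-D ResNet-50
    embedding) with min-max-scaled soft biometric traits."""
    deep = deep_feat / (np.linalg.norm(deep_feat) + 1e-12)
    soft = (soft_traits - soft_traits.min()) / (np.ptp(soft_traits) + 1e-12)
    return np.concatenate([deep, soft])

# Toy vectors: enrol a front view, then match a side-view query.
gallery_front = fuse(np.random.rand(2048), np.random.rand(12))
query_side = fuse(np.random.rand(2048), np.random.rand(12))
score = (gallery_front @ query_side
         / (np.linalg.norm(gallery_front) * np.linalg.norm(query_side)))
print(f"cosine similarity = {score:.3f}")
```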

21 pages, 7041 KB  
Article
Synergy of Internet of Things and Software Engineering Approach for Enhanced Copy–Move Image Forgery Detection Model
by Mohammed Assiri
Electronics 2025, 14(4), 692; https://doi.org/10.3390/electronics14040692 - 11 Feb 2025
Cited by 1 | Viewed by 989
Abstract
The rapid development of digital imagery and the growing requirements of security measures have increased the demand for innovative image analysis methods. Image analysis identifies, classifies, and monitors people, events, or objects in images or videos, and significantly improves security by detecting and preventing attacks that exploit digital images. It is crucial in diverse security fields, including video analysis, anomaly detection, biometrics, object recognition, surveillance, and forensic investigations. By integrating advanced software engineering models with IoT capabilities, this technique revolutionizes copy–move image forgery detection: IoT devices collect and transmit real-world data, improving software solutions that detect and analyze image tampering with exceptional accuracy and efficiency. This combination enhances detection capabilities and provides scalable, adaptive solutions against cutting-edge forgery techniques. Copy–move forgery detection (CMFD) has become one of the most active research domains in blind image forensics. Among existing approaches, most depend on block-based methods, key-point methods, or an integration of the two. Several deep convolutional neural network (DCNN) techniques have been applied to image hashing, image forensics, image retrieval, and image classification, and have outperformed conventional methods. To accomplish robust CMFD, this study develops a fusion of soft computing with a deep learning-based CMFD approach (FSCDL-CMFDA) to secure digital images. The FSCDL-CMFDA approach aims to integrate the benefits of metaheuristics with the DL model for an enhanced CMFD process. In the FSCDL-CMFDA method, histogram equalization is initially performed to improve image quality. A Siamese convolutional neural network (SCNN) model is then used to learn complex features from the pre-processed images, with its hyperparameters chosen by the golden jackal optimization (GJO) model. For the CMFD process, the FSCDL-CMFDA technique employs the regularized extreme learning machine (RELM) classifier. Finally, the detection performance of the RELM method is improved by the beluga whale optimization (BWO) technique. To demonstrate the enhanced performance of the FSCDL-CMFDA method, a comprehensive outcome analysis was conducted using the MNIST and CIFAR datasets. The experimental validation of the FSCDL-CMFDA method portrayed a superior accuracy value of 98.12% over existing models.
(This article belongs to the Special Issue Signal and Image Processing Applications in Artificial Intelligence)
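As a sketch of the first pipeline stage named above, histogram equalisation applied to the luminance channel; the file name is a placeholder and the paper's exact pre-processing settings are not given:

```python
import cv2

# Equalise only the luminance (Y) channel so colours are not distorted.
img = cv2.imread("input_image.png")  # placeholder path
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
equalised = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite("input_image_eq.png", equalised)
```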

16 pages, 3617 KB  
Article
KD-Net: Continuous-Keystroke-Dynamics-Based Human Identification from RGB-D Image Sequences
by Xinxin Dai, Ran Zhao, Pengpeng Hu and Adrian Munteanu
Sensors 2023, 23(20), 8370; https://doi.org/10.3390/s23208370 - 10 Oct 2023
Cited by 2 | Viewed by 2169
Abstract
Keystroke dynamics is a soft biometric based on the assumption that humans always type in uniquely characteristic manners. Previous works mainly focused on analyzing the key press or release events. Unlike these methods, we explored a novel visual modality of keystroke dynamics for human identification using a single RGB-D sensor. In order to verify this idea, we created a dataset dubbed KD-MultiModal, which contains 243.2 K frames of RGB images and depth images, obtained by recording a video of hand typing with a single RGB-D sensor. The dataset comprises RGB-D image sequences of 20 subjects (10 males and 10 females) typing sentences, and each subject typed around 20 sentences. In this task, only the hand and keyboard region contributes to person identification, so we also propose methods for extracting Regions of Interest (RoIs) for each type of data. Unlike the data of the key press or release, our dataset not only captures the velocity of pressing and releasing different keys and the typing style of specific keys or combinations of keys, but also contains rich information on the hand shape and posture. To verify the validity of the proposed data, we adopted deep neural networks (RGB-KD-Net, D-KD-Net, and RGBD-KD-Net) to learn distinguishing features from the different data representations. In addition, a sequence of point clouds can be obtained from the depth images given the intrinsic parameters of the RGB-D sensor, so we also studied the performance of human identification based on point clouds. Extensive experimental results showed that the approach works, with the RGB-D-based method performing best, achieving 99.44% accuracy on unseen real-world data. To inspire more researchers and facilitate relevant studies, the proposed dataset will be publicly accessible together with the publication of this paper.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
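A toy sketch of depth-based RoI extraction for a typing scene, one plausible way to isolate the hand-and-keyboard region; the depth band is an invented placeholder, not the paper's method:

```python
import numpy as np

def extract_roi(depth_mm: np.ndarray, near: int = 400, far: int = 700):
    """Keep pixels whose depth falls in an assumed band around the keyboard
    plane, then return the bounding box of that mask (or None if empty)."""
    mask = (depth_mm > near) & (depth_mm < far)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()  # crop box for RGB and depth

depth = np.random.randint(300, 1200, size=(480, 640))  # toy depth frame in mm
print(extract_roi(depth))
```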

23 pages, 5380 KB  
Article
SoftVein-WELM: A Weighted Extreme Learning Machine Model for Soft Biometrics on Palm Vein Images
by David Zabala-Blanco, Ruber Hernández-García and Ricardo J. Barrientos
Electronics 2023, 12(17), 3608; https://doi.org/10.3390/electronics12173608 - 26 Aug 2023
Cited by 7 | Viewed by 1938
Abstract
Contactless biometric technologies such as palm vein recognition have gained more relevance in the present and immediate future due to the COVID-19 pandemic. Since certain soft biometrics like gender and age can generate variations in the visualization of palm vein patterns, these soft traits can reduce the penetration rate on large-scale databases for mass individual recognition. Due to the limited availability of public databases, few works report on the existing approaches to gender and age classification through vein pattern images. Moreover, soft biometric classification commonly faces the problem of imbalanced data class distributions, representing a limitation of the reported approaches. This paper introduces weighted extreme learning machine (W-ELM) models for gender and age classification based on palm vein images to address imbalanced data problems, improving the classification performance. The highlights of our proposal are that it avoids using a feature extraction process and can incorporate a weight matrix in optimizing the ELM model by exploiting the imbalanced nature of the data, which guarantees its application in realistic scenarios. In addition, we evaluate a new class distribution for soft biometrics on the VERA dataset and a new multi-label scheme identifying gender and age simultaneously. The experimental results demonstrate that both evaluated W-ELM models outperform previous existing approaches and a novel CNN-based method in terms of the accuracy and G-mean metrics, achieving accuracies of 98.91% and 99.53% for gender classification on VERA and PolyU, respectively. In more challenging scenarios for age and gender–age classifications on the VERA dataset, the proposed method reaches accuracies of 97.05% and 96.91%, respectively. The multi-label classification results suggest that further studies can be conducted on multi-task ELM for palm vein recognition.
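A minimal W-ELM sketch for an imbalanced two-class toy problem, following the common closed-form recipe beta = (H^T W H + I/C)^{-1} H^T W T with per-sample weights inversely proportional to class frequency; the hyperparameters are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def welm_train(X, y, n_hidden=200, C=1.0):
    """Closed-form weighted ELM with a random hidden layer."""
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                            # hidden activations
    T = np.where(y[:, None] == np.unique(y), 1.0, -1.0)  # +-1 one-hot targets
    w = 1.0 / np.bincount(y)[y]                          # class-balanced weights
    HW = H * w[:, None]
    beta = np.linalg.solve(H.T @ HW + np.eye(n_hidden) / C,
                           H.T @ (T * w[:, None]))
    return W_in, b, beta

def welm_predict(model, X):
    W_in, b, beta = model
    return np.argmax(np.tanh(X @ W_in + b) @ beta, axis=1)

# Toy imbalanced data: 90 samples of class 0, 10 of class 1.
X = rng.standard_normal((100, 16))
y = np.array([0] * 90 + [1] * 10)
model = welm_train(X, y)
print("train accuracy:", (welm_predict(model, X) == y).mean())
```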

29 pages, 14418 KB  
Article
Classification of Ethnicity Using Efficient CNN Models on MORPH and FERET Datasets Based on Face Biometrics
by Abdulwahid Al Abdulwahid
Appl. Sci. 2023, 13(12), 7288; https://doi.org/10.3390/app13127288 - 19 Jun 2023
Cited by 5 | Viewed by 7697
Abstract
Ethnic conflicts frequently lead to violations of human rights, such as genocide and crimes against humanity, as well as economic collapse, governmental failure, environmental problems, and massive influxes of refugees. Many innocent people suffer as a result of violent ethnic conflict. People’s ethnicity can pose a threat to their safety. There have been many studies on the topic of how to categorize people by race. Until recently, the majority of the work on face biometrics had been conducted on the problem of person recognition from a photograph. However, other soft biometrics such as a person’s age, gender, race, or emotional state are also crucial. The subject of ethnic classification has many potential uses and is developing rapidly. This study summarizes recent advances in ethnicity categorization by utilizing efficient models of convolutional neural networks (CNNs) and focusing on the central portion of the face alone. This article contrasts the results of two distinct CNN models. To put the suggested models through their paces, the study employed holdout testing on the MORPH and FERET datasets. It is essential to remember that this study’s results were generated by focusing on the face’s central region alone, which saved both time and effort. Classification into four classes was achieved with an accuracy of 85% using Model A and 86% using Model B. Consequently, classifying people according to their ethnicity could form a useful part of the video surveillance systems used at checkpoints. This categorization may also be helpful for picture-search queries.
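A trivial sketch of the central-face-region input strategy mentioned above; the crop fraction is an assumption:

```python
import numpy as np

def central_region(face_img: np.ndarray, frac: float = 0.6) -> np.ndarray:
    """Crop a fixed central window from a detected, aligned face image."""
    h, w = face_img.shape[:2]
    dh, dw = int(h * (1 - frac) / 2), int(w * (1 - frac) / 2)
    return face_img[dh:h - dh, dw:w - dw]

face = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in aligned face
print(central_region(face).shape)  # (136, 136, 3) with frac=0.6
```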

33 pages, 5606 KB  
Article
Compact-Fusion Feature Framework for Ethnicity Classification
by Tjokorda Agung Budi Wirayuda, Rinaldi Munir and Achmad Imam Kistijantoro
Informatics 2023, 10(2), 51; https://doi.org/10.3390/informatics10020051 - 12 Jun 2023
Cited by 3 | Viewed by 2402
Abstract
In computer vision, ethnicity classification tasks utilize images containing human faces to extract ethnicity labels. Ethnicity is one of the soft biometric feature categories useful in data analysis for the commercial, public, and health sectors. Ethnicity classification begins with face detection as a preprocessing step to determine a human’s presence; the feature representation is then extracted from the isolated facial image to predict the ethnicity class. This study utilized four handcrafted features (multi-local binary pattern (MLBP), histogram of gradient (HOG), color histogram, and speeded-up-robust-features-based (SURF-based)) as the basis for generating a compact-fusion feature. The compact-fusion framework involves optimal feature selection, compact feature extraction, and compact-fusion feature representation. The final feature representation was trained and tested with an SVM One-Versus-All classifier for ethnicity classification. When evaluated on two large datasets, the proposed framework achieved accuracies of 89.14% and 82.19% on the UTKFace dataset with four and five classes, respectively, and 73.87% on the Fair Face dataset with four classes. Furthermore, the compact-fusion feature, constructed from conventional handcrafted features with only 4790 features, achieved competitive results compared with state-of-the-art methods using a deep-learning-based approach.
(This article belongs to the Section Machine Learning)
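A sketch of the final classification stage, concatenated handcrafted features fed to a One-Versus-All SVM; the feature extractors are stubbed with random vectors and the dimensions are assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

def compact_fusion(mlbp, hog, colour_hist, surf):
    """Stand-in for the compact-fusion step: concatenate the four features."""
    return np.concatenate([mlbp, hog, colour_hist, surf])

rng = np.random.default_rng(0)
X = np.stack([compact_fusion(rng.random(256), rng.random(324),
                             rng.random(64), rng.random(128))
              for _ in range(200)])
y = rng.integers(0, 4, 200)  # four ethnicity classes, toy labels

clf = OneVsRestClassifier(LinearSVC()).fit(X, y)
print(clf.predict(X[:5]))
```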

13 pages, 1462 KB  
Article
A Fast Deep Learning ECG Sex Identifier Based on Wavelet RGB Image Classification
by Jose-Luis Cabra Lopez, Carlos Parra and Gonzalo Forero
Data 2023, 8(6), 97; https://doi.org/10.3390/data8060097 - 29 May 2023
Cited by 4 | Viewed by 3304
Abstract
Human sex recognition with electrocardiogram signals is an emerging area in machine learning, mostly oriented toward neural network approaches. It might be the beginning of a field of heart behavior analysis focused on sex. However, a person’s heartbeat changes during daily activities, which could compromise the classification. In this paper, with the intention of capturing heartbeat dynamics, we divided the heart rate into different intervals, creating a specialized identification model for each interval. Sex classification for each model was performed with a deep convolutional neural network using images that represented the RGB wavelet transformation of the ECG pseudo-orthogonal X, Y, and Z signals, with sufficient samples to train the network. Our database included 202 people, with a female-to-male population ratio of 49.5–50.5% and an observation period of 24 h per person. As our main goal, we looked for periods of time during which the classification rate of sex recognition was higher and the process was faster; in fact, we identified intervals in which only one heartbeat was required. We found that for each heart rate interval, the best accuracy score varied depending on the number of heartbeats collected. Furthermore, our findings indicated that as the heart rate increased, fewer heartbeats were needed for analysis. On average, our proposed model reached an accuracy of 94.82% ± 1.96%. The findings of this investigation provide a heartbeat acquisition procedure for ECG sex recognition systems. In addition, our results encourage future research to include sex as a soft biometric characteristic in person identification scenarios and in cardiology studies, in which the detection of specific male or female anomalies could help autonomous learning machines move toward specialized health applications.
(This article belongs to the Special Issue Signal Processing for Data Mining)
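A sketch of how three ECG leads might be turned into one RGB scalogram image, each pseudo-orthogonal lead mapped to a colour channel; the toy signals, sampling rate, scales, and wavelet choice are assumptions:

```python
import numpy as np
import pywt

fs = 500                                 # assumed sampling rate (Hz)
t = np.arange(0, 1.2, 1 / fs)
leads = [np.sin(2 * np.pi * f * t) for f in (7, 13, 21)]  # stand-ins for X, Y, Z

scales = np.arange(1, 65)
channels = []
for lead in leads:
    coeffs, _ = pywt.cwt(lead, scales, "morl")   # continuous wavelet transform
    mag = np.abs(coeffs)
    channels.append((255 * mag / mag.max()).astype(np.uint8))

rgb = np.stack(channels, axis=-1)  # (scales, time, 3) image for the CNN
print(rgb.shape)
```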

13 pages, 1991 KB  
Article
Deep Learning Pet Identification Using Face and Body
by Elham Azizi and Loutfouz Zaman
Information 2023, 14(5), 278; https://doi.org/10.3390/info14050278 - 8 May 2023
Cited by 5 | Viewed by 10120
Abstract
According to the American Humane Association, millions of cats and dogs are lost yearly. Only a few thousand of them are found and returned home. In this work, we use deep learning to help expedite the procedure of finding lost cats and dogs, for which a new dataset was collected. We applied transfer learning methods on different convolutional neural networks for species classification and animal identification. The framework consists of seven sequential layers: data preprocessing, species classification, face and body detection with landmark detection techniques, face alignment, identification, animal soft biometrics, and recommendation. We achieved an accuracy of 98.18% on species classification. In the face identification layer, 80% accuracy was achieved. Body identification resulted in 81% accuracy. When body identification was used in addition to face identification, the accuracy increased to 86.5%, with the correct animal appearing among our top 10 matching recommendations in 100% of cases. By incorporating animals’ soft biometric information, the system can identify animals with 92% confidence.
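A hypothetical skeleton of the seven-layer pipeline listed above; every stage is a stub that merely records its name, standing in for the corresponding model:

```python
from functools import reduce

def stage(name):
    def run(record):
        record.setdefault("trace", []).append(name)  # stand-in for real work
        return record
    return run

PIPELINE = [stage(s) for s in (
    "preprocessing", "species_classification", "face_body_detection",
    "face_alignment", "identification", "soft_biometrics", "recommendation")]

result = reduce(lambda rec, f: f(rec), PIPELINE, {"image": "query.jpg"})
print(result["trace"])
```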

14 pages, 3182 KB  
Article
Locality-Sensitive Hashing of Soft Biometrics for Efficient Face Image Database Search and Retrieval
by Ameerah Abdullah Alshahrani and Emad Sami Jaha
Electronics 2023, 12(6), 1360; https://doi.org/10.3390/electronics12061360 - 13 Mar 2023
Cited by 3 | Viewed by 3039
Abstract
As multimedia technology has advanced in recent years, the use of enormous image libraries has dramatically expanded. In image processing applications, image retrieval has emerged as a crucial technique. Content-based face image retrieval is a well-established technology in many real-world applications, such as social media, where dependable retrieval capabilities are required to enable quick search among large numbers of images. Humans frequently use faces to recognize and identify individuals, and face recognition from official or personal photos is becoming increasingly popular as it can aid crime detectives in identifying victims and criminals. Furthermore, large numbers of images require substantial storage, and image comparison and matching consequently take longer. Hence, the query speed and low storage consumption of hash-based image retrieval techniques have garnered considerable interest. The main contribution of this work is to address the challenge of improving image retrieval performance by using locality-sensitive hashing (LSH) to retrieve top-matched face images from large-scale databases. We use face soft biometrics as a search input and propose an effective LSH-based method that replaces standard face soft biometrics with their corresponding hash codes for searching a large-scale face database and retrieving the top-k matching face images with higher accuracy in less time. The experimental results, using the Labeled Faces in the Wild (LFW) database together with the corresponding database of attributes (LFW-attributes), show that our proposed method using LSH face soft biometrics (Soft BioHash) improves the performance of face image database search and retrieval and also outperforms the LSH hard face biometrics method (Hard BioHash).
(This article belongs to the Special Issue Intelligent Face Recognition and Multiple Applications)
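A minimal random-hyperplane LSH sketch in the spirit of the method described: attribute vectors become short binary codes and candidates are ranked by Hamming distance. The code length and the 73-attribute dimensionality (matching LFW-attributes) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
N_ATTRS, N_BITS = 73, 32  # 73 as in LFW-attributes; 32-bit codes are a guess

planes = rng.standard_normal((N_BITS, N_ATTRS))  # random hyperplanes

def soft_biohash(attrs: np.ndarray) -> np.ndarray:
    """Binary code: the sign of the projection onto each hyperplane."""
    return (planes @ attrs > 0).astype(np.uint8)

def top_k(query: np.ndarray, codes: np.ndarray, k: int = 5) -> np.ndarray:
    hamming = (codes != soft_biohash(query)).sum(axis=1)
    return np.argsort(hamming)[:k]  # indices of the k closest gallery faces

gallery = rng.standard_normal((10_000, N_ATTRS))   # toy attribute vectors
codes = (gallery @ planes.T > 0).astype(np.uint8)  # hash the whole gallery
noisy_query = gallery[123] + 0.05 * rng.standard_normal(N_ATTRS)
print(top_k(noisy_query, codes))                   # 123 should rank first
```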

17 pages, 3859 KB  
Article
Periocular Data Fusion for Age and Gender Classification
by Carmen Bisogni, Lucia Cascone and Fabio Narducci
J. Imaging 2022, 8(11), 307; https://doi.org/10.3390/jimaging8110307 - 9 Nov 2022
Cited by 5 | Viewed by 2182
Abstract
In recent years, the study of soft biometrics has gained increasing interest in the security and business sectors. These characteristics provide limited biometric information about the individual; hence, it is possible to increase performance by combining numerous data sources to overcome the accuracy limitations of a single trait. In this research, we provide a study on the fusion of periocular features taken from pupils, fixations, and blinks to achieve a demographic classification, i.e., by age and gender. A data fusion approach is implemented for this purpose. To build a trust evaluation of the selected biometric traits, we first employ a concatenation scheme for fusion at the feature level and, at the score level, transformation- and classifier-based score fusion approaches (e.g., weighted sum, weighted product, Bayesian rule). Data fusion enables improved performance and the synthesis of acquired information, as well as its secure storage and the protection of the multi-biometric system’s original biometric models. The combination of these soft biometric characteristics balances the need to protect individual privacy with the need for a strong discriminatory element. The results are quite encouraging, with an age classification accuracy of 84.45% and a gender classification accuracy of 84.62%. These results encourage further study of the periocular area for detecting soft biometrics when the lower part of the face is not visible.
(This article belongs to the Special Issue Multi-Biometric and Multi-Modal Authentication)
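A sketch of two of the transformation-based score fusion rules named above; the scores are assumed pre-normalised to [0, 1] and the weights are illustrative:

```python
import numpy as np

def weighted_sum(scores: np.ndarray, weights: np.ndarray) -> float:
    return float(np.dot(weights, scores))

def weighted_product(scores: np.ndarray, weights: np.ndarray) -> float:
    return float(np.prod(np.power(scores, weights)))

# Per-trait match scores from pupils, fixations, and blinks (invented).
scores = np.array([0.72, 0.65, 0.81])
weights = np.array([0.5, 0.2, 0.3])
print(weighted_sum(scores, weights), weighted_product(scores, weights))
```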

17 pages, 18540 KB  
Article
Robust Identification and Segmentation of the Outer Skin Layers in Volumetric Fingerprint Data
by Alexander Kirfel, Tobias Scheer, Norbert Jung and Christoph Busch
Sensors 2022, 22(21), 8229; https://doi.org/10.3390/s22218229 - 27 Oct 2022
Cited by 6 | Viewed by 3028
Abstract
Despite the long history of fingerprint biometrics and its use to authenticate individuals, there are still some unsolved challenges with fingerprint acquisition and presentation attack detection (PAD). Currently available commercial fingerprint capture devices struggle with non-ideal skin conditions, including soft skin in infants. They are also susceptible to presentation attacks, which limits their applicability in unsupervised scenarios such as border control. Optical coherence tomography (OCT) could be a promising solution to these problems. In this work, we propose a digital signal processing chain for segmenting two complementary fingerprints from the same OCT fingertip scan: One fingerprint is captured as usual from the epidermis (“outer fingerprint”), whereas the other is taken from inside the skin, at the junction between the epidermis and the underlying dermis (“inner fingerprint”). The resulting 3D fingerprints are then converted to a conventional 2D grayscale representation from which minutiae points can be extracted using existing methods. Our approach is device-independent and has been proven to work with two different time domain OCT scanners. Using efficient GPGPU computing, it took less than a second to process an entire gigabyte of OCT data. To validate the results, we captured OCT fingerprints of 130 individual fingers and compared them with conventional 2D fingerprints of the same fingers. We found that both the outer and inner OCT fingerprints were backward compatible with conventional 2D fingerprints, with the inner fingerprint generally being less damaged and, therefore, more reliable.
(This article belongs to the Special Issue Biometric Technologies Based on Optical Coherence Tomography (OCT))
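A toy sketch of one plausible step in such a processing chain, locating the outer skin surface in each A-scan as the first strong intensity rise; this heuristic and all values are assumptions, not the paper's actual algorithm:

```python
import numpy as np

def surface_depth(a_scan: np.ndarray, thresh: float) -> int:
    """Index of the first strong intensity rise along one A-scan, or -1."""
    grad = np.diff(a_scan.astype(float))
    idx = np.flatnonzero(grad > thresh)
    return int(idx[0]) if idx.size else -1

volume = np.random.rand(64, 64, 512)  # toy OCT volume: (x, y, depth)
surface = np.apply_along_axis(surface_depth, 2, volume, thresh=0.8)
print(surface.shape)  # (64, 64) depth map of the outer skin surface
```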