Search Results (35)

Search Parameters:
Keywords = multimodal biometric identification

24 pages, 1563 KB  
Article
Sequential Multimodal Biometric Authentication Fusion System
by Swati Rastogi, Sanoj Kumar, Musrrat Ali and Abdul Rahaman Wahab Sait
Mathematics 2026, 14(7), 1178; https://doi.org/10.3390/math14071178 - 1 Apr 2026
Viewed by 329
Abstract
This study proposes an improved DenseNet-based Sequential Multimodal Biometric Authentication System that combines face and ear modalities for more reliable human identification. The architecture comprises three convolutional layers and two dense layers, optimized to extract discriminative spatial representations from 200 × 200 pixel face and ear images. Evaluation uses strict 5-fold subject-disjoint cross-validation to ensure unbiased assessment. The proposed model attained a steady classification accuracy of 97.1 ± 0.79% with balanced precision, recall, and F1-score under controlled validation conditions, while performance analysis of the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER) showed an EER of around 1.05% at the optimal operating point. Comparative experiments between parallel feature concatenation and sequential verification show that the sequential framework yields a lower FAR than the parallel framework without harming overall accuracy, and statistical validation by analysis of variance confirms that the incremental architectural improvements yield significant performance gains. Score-distribution analysis shows that the proposed method outperforms both single-trait and traditional multifactor systems, positioning it as a candidate for next-generation authentication solutions. This study advances biometric security by demonstrating how multimodal fusion can address the growing global demand for robust, privacy-aware authentication, setting a benchmark for intelligent multimodal recognition systems. Full article
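The FAR/FRR/EER evaluation this abstract describes can be sketched as follows; the threshold sweep is the generic textbook procedure, and the score lists are illustrative stand-ins, not data from the paper.

```python
def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted (>= threshold);
    FRR: fraction of genuine scores rejected (< threshold)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor, steps=1000):
    """Sweep thresholds over the score range and return (EER, threshold)
    at the point where FAR and FRR are closest."""
    lo, hi = min(genuine + impostor), max(genuine + impostor)
    best_gap, best_eer, best_t = float("inf"), 1.0, lo
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_eer, best_t = abs(far - frr), (far + frr) / 2, t
    return best_eer, best_t

# Illustrative similarity scores (not the paper's data):
genuine = [0.91, 0.88, 0.95, 0.83, 0.97, 0.90, 0.86, 0.93]
impostor = [0.12, 0.30, 0.25, 0.41, 0.18, 0.35, 0.22, 0.28]
eer, threshold = equal_error_rate(genuine, impostor)
# These toy distributions are fully separable, so the EER here is 0.
```

The lower FAR reported for the sequential framework would show up in such a sweep as a smaller impostor tail above the chosen operating threshold.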

34 pages, 28662 KB  
Article
Template-Driven Multimodal Face Pseudonymization for Privacy-Preserving Big Data Analytics
by Yeong Su Lee, Hendrik Bothe and Michaela Geierhos
Algorithms 2026, 19(3), 176; https://doi.org/10.3390/a19030176 - 26 Feb 2026
Viewed by 309
Abstract
Profile images from social networks are a valuable source of data for AI analytics, but they contain biometric identifiers that pose serious privacy risks. The current face anonymization techniques often destroy semantic information, and generative de-identification methods are vulnerable to re-identification attacks. In this paper, we propose a template-driven multimodal face pseudonymization framework that allows for the privacy-preserving analysis of facial image data while retaining analytically relevant attributes. Our approach uses a FaceNet-based CelebA attribute classifier to extract fine-grained facial attributes and a DeepFace model to extract high-level demographic attributes. Rather than relying on stochastic large language models, we introduce deterministic template-based attribute-to-text conversion to ensure consistency and reproducibility and prevent unintended attribute hallucination. The resulting textual description serves as the sole conditioning input for Janus-Pro, a multimodal text-to-image generation model that synthesizes realistic yet non-identifiable face images. We evaluate our method on the CelebA dataset under a strong adversarial threat model, employing state-of-the-art face recognition systems to assess re-identification and linkability attacks. Our results demonstrate a substantial reduction in identity leakage while preserving semantic attributes. Full article
(This article belongs to the Special Issue Blockchain and Big Data Analytics: AI-Driven Data Science)
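The deterministic template-based attribute-to-text step can be illustrated with a minimal sketch. The attribute names and template wording below are hypothetical, not the ones used in the paper; the point is only that a fixed template makes the caption a pure function of the classifier outputs, with no room for hallucinated attributes.

```python
# Hypothetical attribute-to-text conversion: a fixed template guarantees
# that identical attribute sets always yield identical captions
# (deterministic, unlike prompting a stochastic LLM).
TEMPLATE = ("A {age_group} {gender} with {hair} hair, "
            "{expression} expression, {glasses}glasses.")

def attributes_to_text(attrs):
    """Map a dict of classifier outputs to one canonical caption."""
    return TEMPLATE.format(
        age_group=attrs["age_group"],
        gender=attrs["gender"],
        hair=attrs["hair"],
        expression=attrs["expression"],
        glasses="" if attrs["wears_glasses"] else "no ",
    )

attrs = {"age_group": "young", "gender": "woman", "hair": "wavy brown",
         "expression": "smiling", "wears_glasses": False}
caption = attributes_to_text(attrs)
```

In the pipeline described above, a caption produced this way would be the sole conditioning input to the text-to-image generator, so the synthesized face can only reflect the whitelisted attributes.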

25 pages, 966 KB  
Review
Precision Livestock Farming for Dairy Sheep: A Literature Review of IoT and Decision-Support Systems for Enhanced Management and Welfare
by Maria Consuelo Mura, Othmane Trimasse, Vincenzo Carcangiu and Sebastiano Luridiana
AgriEngineering 2026, 8(2), 58; https://doi.org/10.3390/agriengineering8020058 - 6 Feb 2026
Viewed by 913
Abstract
Dairy sheep farming, vital to the Mediterranean economy, struggles to balance productivity, sustainability, and animal welfare, especially in extensive, small-scale systems. Precision livestock farming (PLF) technologies offer new opportunities by enabling continuous, non-invasive, and data-driven monitoring across diverse farming conditions. Despite rapid progress in sensors, computer vision, wearable devices, and artificial intelligence (AI), a comprehensive synthesis focused on dairy sheep remains limited. This review provides an updated overview of PLF applications in dairy sheep farming. The 2018–2025 timeframe was chosen to capture recent advances in Internet of Things (IoT), AI, and sensor technologies that have achieved practical relevance only in recent years. The review identifies core technological domains such as automated weight and body condition monitoring, biometric identification, wearable and IoT-based sensors, localization systems, behavioral and thermal monitoring, virtual fencing, drone-assisted herding, and advanced decision-support tools. Innovations including lightweight deep-learning models, multimodal sensing frameworks, and digital twins highlight the growing potential for scalable, real-time applications. While technological progress is substantial, practical adoption is hindered by economic, technical, interoperability, and ethical barriers. This review consolidates current evidence and identifies future priorities to guide the development of integrated, welfare-focused PLF solutions for dairy sheep farming. Full article
(This article belongs to the Special Issue New Management Technologies for Precision Livestock Farming)

30 pages, 4189 KB  
Systematic Review
Automated Fingerprint Identification: The Role of Artificial Intelligence in Crime Scene Investigation
by Csongor Herke
Forensic Sci. 2026, 6(1), 6; https://doi.org/10.3390/forensicsci6010006 - 22 Jan 2026
Viewed by 2229
Abstract
Background/Objectives: This systematic review examines how artificial intelligence (AI) is transforming fingerprint and latent print identification in criminal investigations, tracing the evolution from traditional dactyloscopy to Automated Fingerprint Identification Systems (AFISs) and AI-enhanced biometric pipelines. Methods: Following PRISMA 2020 guidelines, we conducted a literature search in the Scopus, Web of Science, PubMed/MEDLINE, and legal databases for the period 2000–2025, using multi-step Boolean search strings targeting AI-based fingerprint identification; 68,195 records were identified, of which 61 peer-reviewed studies met predefined inclusion criteria and were included in the qualitative synthesis (no meta-analysis). Results: Across the included studies, AI-enhanced AFIS solutions frequently demonstrated improvements in speed and scalability and, in several controlled benchmarks, improved matching performance on low-quality or partial fingerprints, although the results varied depending on datasets, evaluation protocols, and operational contexts. They also showed a potential to reduce certain forms of examiner-related contextual bias, while remaining susceptible to dataset- and model-induced biases. Conclusions: The evidence indicates that hybrid human–AI workflows—where expert examiners retain decision-making authority but use AI for candidate filtering, image enhancement, and data structuring—currently offer the most reliable model. Emerging developments such as multimodal biometric fusion, edge computing, and quantum machine learning may make AI-based fingerprint identification an increasingly important component of law enforcement practice, provided that robust regulation, continuous validation, and transparent governance are ensured. Full article

23 pages, 2725 KB  
Article
Text- and Face-Conditioned Multi-Anchor Conditional Embedding for Robust Periocular Recognition
by Po-Ling Fong, Tiong-Sik Ng and Andrew Beng Jin Teoh
Appl. Sci. 2026, 16(2), 942; https://doi.org/10.3390/app16020942 - 16 Jan 2026
Viewed by 307
Abstract
Periocular recognition is essential when full-face images cannot be used because of occlusion, privacy constraints, or sensor limitations, yet in many deployments, only periocular images are available at run time, while richer evidence, such as archival face photos and textual metadata, exists offline. This mismatch makes it hard to deploy conventional multimodal fusion. This motivates the notion of conditional biometrics, where auxiliary modalities are used only during training to learn stronger periocular representations while keeping deployment strictly periocular-only. In this paper, we propose Multi-Anchor Conditional Periocular Embedding (MACPE), which maps periocular, facial, and textual features into a shared anchor-conditioned space via a learnable anchor bank that preserves periocular micro-textures while aligning higher-level semantics. Training combines identity classification losses on periocular and face branches with a symmetric InfoNCE loss over anchors and a pulling regularizer that jointly aligns periocular, facial, and textual embeddings without collapsing into face-dominated solutions; captions generated by a vision-language model provide complementary semantic supervision. At deployment, only the periocular encoder is used. Experiments across five periocular datasets show that MACPE consistently improves Rank-1 identification and reduces EER at a fixed FAR compared with periocular-only baselines and alternative conditioning methods. Ablation studies verify the contributions of anchor-conditioned embeddings, textual supervision, and the proposed loss design. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
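A symmetric InfoNCE loss of the kind mentioned above can be sketched in plain Python. This toy version contrasts paired embeddings directly and omits the paper's anchor bank, classification losses, and pulling regularizer; the embedding values are illustrative only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(queries, keys, tau=0.1):
    """Cross-entropy of each query against all keys, where the
    matching-index key is the positive (diagonal pairs)."""
    total = 0.0
    for i, q in enumerate(queries):
        logits = [cosine(q, k) / tau for k in keys]
        m = max(logits)  # stabilize the log-sum-exp
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_denom - logits[i]
    return total / len(queries)

def symmetric_info_nce(a, b, tau=0.1):
    """Average the A->B and B->A directions, as in CLIP-style training."""
    return 0.5 * (info_nce(a, b, tau) + info_nce(b, a, tau))

# Toy periocular/face embedding pairs (illustrative values only):
peri = [[1.0, 0.0], [0.0, 1.0]]
face = [[0.9, 0.1], [0.1, 0.9]]
loss = symmetric_info_nce(peri, face)  # near zero: pairs already aligned
```

Minimizing such a loss pulls each periocular embedding toward its own subject's face/text embedding and pushes it away from the other subjects in the batch.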

31 pages, 1455 KB  
Article
A User-Centric Context-Aware Framework for Real-Time Optimisation of Multimedia Data Privacy Protection, and Information Retention Within Multimodal AI Systems
by Ndricim Topalli and Atta Badii
Sensors 2025, 25(19), 6105; https://doi.org/10.3390/s25196105 - 3 Oct 2025
Cited by 2 | Viewed by 1945 | Correction
Abstract
The increasing use of AI systems for face, object, action, scene, and emotion recognition raises significant privacy risks, particularly when processing Personally Identifiable Information (PII). Current privacy-preserving methods lack adaptability to users’ preferences and contextual requirements, and obfuscate user faces uniformly. This research proposes a user-centric, context-aware, and ontology-driven privacy protection framework that dynamically adjusts privacy decisions based on user-defined preferences, entity sensitivity, and contextual information. The framework integrates state-of-the-art recognition models for recognising faces, objects, scenes, actions, and emotions in real time on data acquired from vision sensors (e.g., cameras). Privacy decisions are guided by a contextual ontology based on Contextual Integrity theory, which classifies entities into private, semi-private, or public categories. Adaptive privacy levels are enforced through obfuscation techniques and a multi-level privacy model that supports user-defined red lines (e.g., “always hide logos”). The framework also proposes a Re-Identifiability Index (RII) using soft biometric features such as gait, hairstyle, clothing, skin tone, age, and gender, to mitigate identity leakage and to support fallback protection when face recognition fails. The experimental evaluation relied on sensor-captured datasets, which replicate real-world image sensors such as surveillance cameras. User studies confirmed that the framework was effective, with over 85.2% of participants rating the obfuscation operations as highly effective, and the remaining 14.8% stating that obfuscation was adequately effective. Amongst these, 71.4% considered the balance between privacy protection and usability very satisfactory and 28% found it satisfactory. GPU acceleration was deployed to enable real-time performance of these models by reducing frame processing time from 1200 ms (CPU) to 198 ms. This ontology-driven framework employs user-defined red lines, contextual reasoning, and dual metrics (RII/IVI) to dynamically balance privacy protection with scene intelligibility. Unlike current anonymisation methods, the framework provides a real-time, user-centric, and GDPR-compliant method that operationalises privacy-by-design while preserving scene intelligibility. These features make the framework appropriate for a variety of real-world applications including healthcare, surveillance, and social media. Full article
(This article belongs to the Section Intelligent Sensors)
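The Re-Identifiability Index idea lends itself to a small sketch: a weighted sum over how visible each soft-biometric cue remains after obfuscation. The cue list follows the abstract, but the weights and scores below are hypothetical, not values from the paper.

```python
# Hypothetical RII weights over the soft-biometric cues the abstract lists.
# Weights are illustrative and sum to 1 so the index stays in [0, 1].
RII_WEIGHTS = {"gait": 0.25, "hairstyle": 0.15, "clothing": 0.20,
               "skin_tone": 0.10, "age": 0.15, "gender": 0.15}

def reidentifiability_index(visibility):
    """visibility maps each cue to how exposed it remains after
    obfuscation, from 0.0 (fully hidden) to 1.0 (fully visible)."""
    return sum(RII_WEIGHTS[cue] * visibility.get(cue, 0.0)
               for cue in RII_WEIGHTS)

# Face hidden, but gait and clothing still fully visible:
rii = reidentifiability_index({"gait": 1.0, "clothing": 1.0})
```

A fallback policy of the kind the abstract describes could escalate obfuscation (e.g. blur the whole silhouette) whenever the index exceeds a user-chosen threshold, even when face recognition itself has already failed.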

25 pages, 1072 KB  
Review
EEG-Based Biometric Identification and Emotion Recognition: An Overview
by Miguel A. Becerra, Carolina Duque-Mejia, Andres Castro-Ospina, Leonardo Serna-Guarín, Cristian Mejía and Eduardo Duque-Grisales
Computers 2025, 14(8), 299; https://doi.org/10.3390/computers14080299 - 23 Jul 2025
Cited by 5 | Viewed by 4900
Abstract
This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as support vector machines (SVMs), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

27 pages, 3417 KB  
Article
GaitCSF: Multi-Modal Gait Recognition Network Based on Channel Shuffle Regulation and Spatial-Frequency Joint Learning
by Siwei Wei, Xiangyuan Xu, Dewen Liu, Chunzhi Wang, Lingyu Yan and Wangyu Wu
Sensors 2025, 25(12), 3759; https://doi.org/10.3390/s25123759 - 16 Jun 2025
Cited by 1 | Viewed by 1933
Abstract
Gait recognition, as a non-contact biometric technology, offers unique advantages in scenarios requiring long-distance identification without active cooperation from subjects. However, existing gait recognition methods predominantly rely on single-modal data, which demonstrates insufficient feature expression capabilities when confronted with complex factors in real-world environments, including viewpoint variations, clothing differences, occlusion problems, and illumination changes. This paper addresses these challenges by introducing a multi-modal gait recognition network based on channel shuffle regulation and spatial-frequency joint learning, which integrates two complementary modalities (silhouette data and heatmap data) to construct a more comprehensive gait representation. The channel shuffle-based feature selective regulation module achieves cross-channel information interaction and feature enhancement through channel grouping and feature shuffling strategies. This module divides input features along the channel dimension into multiple subspaces, which undergo channel-aware and spatial-aware processing to capture dependency relationships across different dimensions. Subsequently, channel shuffling operations facilitate information exchange between different semantic groups, achieving adaptive enhancement and optimization of features with relatively low parameter overhead. The spatial-frequency joint learning module maps spatiotemporal features to the spectral domain through fast Fourier transform, effectively capturing inherent periodic patterns and long-range dependencies in gait sequences. The global receptive field advantage of frequency domain processing enables the model to transcend local spatiotemporal constraints and capture global motion patterns. 
Concurrently, the spatial domain processing branch balances the contributions of frequency and spatial domain information through an adaptive weighting mechanism, maintaining computational efficiency while enhancing features. Experimental results demonstrate that the proposed GaitCSF model achieves significant performance improvements on mainstream datasets including GREW, Gait3D, and SUSTech1k, breaking through the performance bottlenecks of traditional methods. The implications of this research are significant for improving the performance and robustness of gait recognition systems when implemented in practical application scenarios. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
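The core idea of the spatial-frequency module, mapping a gait sequence to the spectral domain so its periodicity becomes a global feature, can be sketched with a naive DFT. The paper uses a fast Fourier transform inside a deep network; the signal here is a synthetic single-feature stand-in.

```python
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum (a stand-in for the FFT the
    paper applies to spatiotemporal feature maps)."""
    n = len(signal)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n)
                 for t, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * t / n)
                  for t, x in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

def dominant_period(signal):
    """Return the period (in frames) of the strongest non-DC component."""
    mags = dft_magnitudes(signal)
    k = max(range(1, len(mags)), key=mags.__getitem__)
    return len(signal) / k

# Synthetic gait-like feature oscillating once every 8 frames:
frames = 32
signal = [math.sin(2 * math.pi * t / 8) for t in range(frames)]
period = dominant_period(signal)
```

Because every spectral bin aggregates the whole sequence, a frequency-domain branch sees the gait cycle globally rather than through a local temporal window, which is the receptive-field advantage the abstract describes.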

31 pages, 10299 KB  
Review
Livestock Biometrics Identification Using Computer Vision Approaches: A Review
by Hua Meng, Lina Zhang, Fan Yang, Lan Hai, Yuxing Wei, Lin Zhu and Jue Zhang
Agriculture 2025, 15(1), 102; https://doi.org/10.3390/agriculture15010102 - 4 Jan 2025
Cited by 23 | Viewed by 10106
Abstract
In the domain of animal management, the technology for individual livestock identification is in a state of continuous evolution, encompassing objectives such as precise tracking of animal activities, optimization of vaccination procedures, effective disease control, accurate recording of individual growth, and prevention of theft and fraud. These advancements are pivotal to the efficient and sustainable development of the livestock industry. Recently, visual livestock biometrics have emerged as a highly promising research focus due to their non-invasive nature. This paper aims to comprehensively survey the techniques for individual livestock identification based on computer vision methods. It begins by elucidating the uniqueness of the primary biometric features of livestock, such as facial features, and their critical role in the recognition process. This review systematically overviews the data collection environments and devices used in related research, providing an analysis of the impact of different scenarios on recognition accuracy. Then, the review delves into the analysis and explication of livestock identification methods, based on extant research outcomes, with a focus on the application and trends of advanced technologies such as deep learning. We also highlight the challenges faced in this field, such as data quality and algorithmic efficiency, and introduce the baseline models and innovative solutions developed to address these issues. Finally, potential future research directions are explored, including the investigation of multimodal data fusion techniques, the construction and evaluation of large-scale benchmark datasets, and the application of multi-target tracking and identification technologies in livestock scenarios. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

29 pages, 2031 KB  
Article
Monitoring and Analyzing Driver Physiological States Based on Automotive Electronic Identification and Multimodal Biometric Recognition Methods
by Shengpei Zhou, Nanfeng Zhang, Qin Duan, Xiaosong Liu, Jinchao Xiao, Li Wang and Jingfeng Yang
Algorithms 2024, 17(12), 547; https://doi.org/10.3390/a17120547 - 2 Dec 2024
Cited by 7 | Viewed by 2419
Abstract
In an intelligent driving environment, monitoring the physiological state of drivers is crucial for ensuring driving safety. This paper proposes a method for monitoring and analyzing driver physiological characteristics by combining electronic vehicle identification (EVI) with multimodal biometric recognition. The method aims to efficiently monitor the driver’s heart rate, breathing frequency, emotional state, and fatigue level, providing real-time feedback to intelligent driving systems to enhance driving safety. First, considering the precision, adaptability, and real-time capabilities of current physiological signal monitoring devices, an intelligent cushion integrating MEMSs (Micro-Electro-Mechanical Systems) and optical sensors is designed. This cushion collects heart rate and breathing frequency data in real time without disrupting the driver, while an electrodermal activity monitoring system captures electromyography data. The sensor layout is optimized to accommodate various driving postures, ensuring accurate data collection. The EVI system assigns a unique identifier to each vehicle, linking it to the physiological data of different drivers. By combining the driver physiological data with the vehicle’s operational environment data, a comprehensive multi-source data fusion system is established for a driving state evaluation. Secondly, a deep learning model is employed to analyze physiological signals, specifically combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. The CNN extracts spatial features from the input signals, while the LSTM processes time-series data to capture the temporal characteristics. This combined model effectively identifies and analyzes the driver’s physiological state, enabling timely anomaly detection. The method was validated through real-vehicle tests involving multiple drivers, where extensive physiological and driving behavior data were collected. 
Experimental results show that the proposed method significantly enhances the accuracy and real-time performance of physiological state monitoring. These findings highlight the effectiveness of combining EVI with multimodal biometric recognition, offering a reliable means for assessing driver states in intelligent driving systems. Furthermore, the results emphasize the importance of personalizing adjustments based on individual driver differences for more effective monitoring. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

18 pages, 4451 KB  
Article
A Biometric Identification for Multi-Modal Biomedical Signals in Geriatric Care
by Yue Che, Lingyan Du, Guozhi Tang and Shihai Ling
Sensors 2024, 24(20), 6558; https://doi.org/10.3390/s24206558 - 11 Oct 2024
Cited by 2 | Viewed by 2665
Abstract
With the acceleration of global population aging, the elderly have a growing need for home care and nursing institutions, and health prevention and management for older adults have become increasingly important. In this context, we propose a biometric recognition method for multi-modal biomedical signals. This article focuses on three key signals that wearable devices can capture: ECG, PPG, and respiration (RESP). The RESP signal is added to the existing two-modality identification scheme to enable multi-modal identification. First, features of each signal are extracted in the time–frequency domain. To represent deep features in a low-dimensional feature space and expedite authentication tasks, PCA and LDA are employed for dimensionality reduction; MCCA is used for feature fusion, and SVM for identification. The accuracy and performance of the system were evaluated on both public and self-collected datasets, with an accuracy above 99.5%. The experimental data show that this method significantly improves the accuracy of identity recognition. In the future, combined with the signal-monitoring functions of wearable devices, it could quickly identify elderly individuals in abnormal conditions, provide safer and more efficient medical services for the elderly, and relieve pressure on medical resources. Full article
(This article belongs to the Section Biomedical Sensors)
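As a much-simplified stand-in for the PCA/LDA, MCCA, and SVM pipeline described above, the sketch below fuses per-modality feature vectors by plain concatenation and identifies by nearest centroid; all feature values, subject names, and the fusion/classification choices are illustrative, not the paper's.

```python
import math

def fuse(ecg, ppg, resp):
    """Concatenate per-modality feature vectors into one fused vector
    (a crude stand-in for the paper's MCCA feature fusion)."""
    return ecg + ppg + resp

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(sample, enrolled):
    """Nearest-centroid identification over enrolled subjects
    (a crude stand-in for the paper's SVM classifier)."""
    return min(enrolled, key=lambda sid: euclidean(sample, enrolled[sid]))

# Hypothetical enrolled templates: subject -> fused feature centroid
enrolled = {
    "subject_a": fuse([0.9, 0.1], [0.8], [0.2]),
    "subject_b": fuse([0.2, 0.7], [0.3], [0.9]),
}
probe = fuse([0.85, 0.15], [0.75], [0.25])
who = identify(probe, enrolled)
```

The design point the abstract makes survives even in this toy: adding a third modality (RESP) widens the fused vector, so two subjects with similar ECG/PPG features can still be separated by their breathing features.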

26 pages, 501 KB  
Article
In-Depth Analysis of GAF-Net: Comparative Fusion Approaches in Video-Based Person Re-Identification
by Moncef Boujou, Rabah Iguernaissi, Lionel Nicod, Djamal Merad and Séverine Dubuisson
Algorithms 2024, 17(8), 352; https://doi.org/10.3390/a17080352 - 11 Aug 2024
Cited by 4 | Viewed by 2461
Abstract
This study provides an in-depth analysis of GAF-Net, a novel model for video-based person re-identification (Re-ID) that matches individuals across different video sequences. GAF-Net combines appearance-based features with gait-based features derived from skeletal data, offering a new approach that diverges from traditional silhouette-based methods. We thoroughly examine each module of GAF-Net and explore various fusion methods at both the score and feature levels, extending beyond initial simple concatenation. Comprehensive evaluations on the iLIDS-VID and MARS datasets demonstrate GAF-Net’s effectiveness across scenarios. GAF-Net achieves state-of-the-art 93.2% rank-1 accuracy on iLIDS-VID’s long sequences, while MARS results (86.09% mAP, 89.78% rank-1) reveal challenges with shorter, variable sequences in complex real-world settings. We demonstrate that integrating skeleton-based gait features consistently improves Re-ID performance, particularly with long, more informative sequences. This research provides crucial insights into multi-modal feature integration in Re-ID tasks, laying a foundation for the advancement of multi-modal biometric systems for diverse computer vision applications. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
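Score-level fusion, one of the strategies the study compares against feature-level fusion, can be sketched as a weighted sum of appearance and gait similarities; the weight and all score values below are illustrative, not GAF-Net's.

```python
def score_level_fusion(app_score, gait_score, alpha=0.7):
    """Weighted-sum fusion of appearance and gait similarity scores.
    alpha is a hypothetical weight, not a value from the paper."""
    return alpha * app_score + (1 - alpha) * gait_score

def rank_gallery(probe_scores):
    """probe_scores: gallery_id -> (appearance, gait) similarity.
    Returns gallery ids sorted best-first by fused score; the top
    entry is the rank-1 match."""
    return sorted(probe_scores,
                  key=lambda gid: score_level_fusion(*probe_scores[gid]),
                  reverse=True)

# Illustrative gallery similarities for one probe:
scores = {"id_1": (0.80, 0.40), "id_2": (0.60, 0.95), "id_3": (0.30, 0.20)}
ranking = rank_gallery(scores)
```

Note how the gait channel flips the decision: on appearance alone id_1 would win, but the fused score prefers id_2, which is exactly the kind of complementarity the skeleton-based gait features are meant to contribute.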

24 pages, 2611 KB  
Article
Biometric Breakthroughs for Sustainable Travel: Transforming Public Transportation through Secure Identification
by Kristina Čižiūnienė, Margarita Prokopovič, Jurijus Zaranka and Jonas Matijošius
Sustainability 2024, 16(12), 5071; https://doi.org/10.3390/su16125071 - 14 Jun 2024
Cited by 8 | Viewed by 3880
Abstract
This study investigates the use of biometric technology in public transit to improve trip safety and effectiveness. The incorporation of biometric technology into transit networks improves efficiency and security but also poses substantial challenges of privacy, standardisation, and public acceptability. Through a poll of 25 specialists in Lithuania, the study assessed the acceptability and practicality of using biometric identification for both drivers and passengers. The results suggest a divided view about the specific demographic that biometric applications should target. However, there is agreement on the considerable potential of these technologies to enhance transportation safety. Face recognition has been the favoured approach due to its non-intrusive nature and simplicity of integration. The statistical analysis demonstrated significant positive correlations between different biometric approaches, indicating that a multimodal strategy is effective for providing full security coverage. The research highlights the significance of resolving privacy issues, emphasising that public acceptability depends on the open management and strong safeguarding of biometric data. The findings support the deliberate use of biometric technologies in sustainable public transportation, emphasising their ability to improve safety, optimise operations, and even revolutionise the passenger experience. This underscores the balanced consideration of technology, security, and privacy in the progress of sustainable public transportation systems. Biometric technology in public transport, especially for monitoring driver health and ensuring passenger safety, is supported by experts as a means to enhance service quality, reduce accidents, and optimize route planning. Full article
(This article belongs to the Special Issue Open Urban Mobility for Efficient and Sustainable Transport)

23 pages, 4201 KB  
Article
OcularSeg: Accurate and Efficient Multi-Modal Ocular Segmentation in Non-Constrained Scenarios
by Yixin Zhang, Caiyong Wang, Haiqing Li, Xianyun Sun, Qichuan Tian and Guangzhe Zhao
Electronics 2024, 13(10), 1967; https://doi.org/10.3390/electronics13101967 - 17 May 2024
Cited by 3 | Viewed by 2868
Abstract
Multi-modal ocular biometrics has recently garnered significant attention due to its potential in enhancing the security and reliability of biometric identification systems in non-constrained scenarios. However, accurately and efficiently segmenting multi-modal ocular traits (periocular, sclera, iris, and pupil) remains challenging due to noise interference or environmental changes, such as specular reflection, gaze deviation, blur, occlusions from eyelid/eyelash/glasses, and illumination/spectrum/sensor variations. To address these challenges, we propose OcularSeg, a densely connected encoder–decoder model incorporating an eye shape prior. The model utilizes EfficientNetV2 as a lightweight backbone in the encoder for extracting multi-level visual features while minimizing network parameters. Moreover, we introduce the Expectation–Maximization attention (EMA) unit to progressively refine the model’s attention and roughly aggregate features from each ocular modality. In the decoder, we design a bottom-up dense subtraction module (DSM) to amplify information disparity between encoder layers, facilitating the acquisition of high-level semantic detailed features at varying scales, thereby enhancing the precision of detailed ocular region prediction. Additionally, boundary- and semantic-guided eye shape priors are integrated as auxiliary supervision during training to optimize the position, shape, and internal topological structure of segmentation results. Due to the scarcity of datasets with multi-modal ocular segmentation annotations, we manually annotated three challenging eye datasets captured in near-infrared and visible light scenarios. Experimental results on newly annotated and existing datasets demonstrate that our model achieves state-of-the-art performance in intra- and cross-dataset scenarios while maintaining efficient execution. Full article
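The core idea behind Expectation–Maximization attention, alternating responsibility estimates (E-step) with basis updates (M-step) so that a feature map is compressed onto a small set of learned bases, can be sketched in NumPy. This is a minimal illustration only: the basis count, random initialization, and normalization choices below are assumptions, not the exact EMA unit used in OcularSeg.

```python
import numpy as np

def em_attention(X, K=8, iters=3, eps=1e-6):
    """Expectation-Maximization attention over a flattened feature map.

    X: (N, C) array of N spatial positions with C channels.
    Returns the low-rank reconstruction (N, C) and the K bases (K, C).
    """
    rng = np.random.default_rng(0)
    mu = rng.standard_normal((K, X.shape[1]))            # initial bases
    mu /= np.linalg.norm(mu, axis=1, keepdims=True) + eps
    for _ in range(iters):
        # E-step: softmax responsibility of each basis for each position
        logits = X @ mu.T                                # (N, K)
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        z = np.exp(logits)
        z /= z.sum(axis=1, keepdims=True) + eps
        # M-step: update bases as responsibility-weighted means, renormalize
        mu = (z.T @ X) / (z.sum(axis=0)[:, None] + eps)
        mu /= np.linalg.norm(mu, axis=1, keepdims=True) + eps
    return z @ mu, mu
```

Because the output is `z @ mu`, its rank is at most K, which is what makes the attended representation compact and robust to per-pixel noise.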
(This article belongs to the Special Issue Biometric Recognition: Latest Advances and Prospects)

25 pages, 5346 KB  
Article
An Improved Multimodal Biometric Identification System Employing Score-Level Fuzzification of Finger Texture and Finger Vein Biometrics
by Syed Aqeel Haider, Shahzad Ashraf, Raja Masood Larik, Nusrat Husain, Hafiz Abdul Muqeet, Usman Humayun, Ashraf Yahya, Zeeshan Ahmad Arfeen and Muhammad Farhan Khan
Sensors 2023, 23(24), 9706; https://doi.org/10.3390/s23249706 - 8 Dec 2023
Cited by 11 | Viewed by 7785
Abstract
This research work focuses on a Near-Infrared (NIR) finger-image-based multimodal biometric system built on Finger Texture and Finger Vein biometrics. The individual results of the two biometric characteristics are fused using a fuzzy system to obtain the final identification result. Experiments are performed on three databases: the Near-Infra-Red Hand Images (NIRHI), Hong Kong Polytechnic University (HKPU) and University of Twente Finger Vein Pattern (UTFVP) databases. First, the Finger Texture biometric employs an efficient texture feature extraction algorithm, the Local Binary Pattern (LBP), and classification is performed using a Support Vector Machine, a proven machine learning classification algorithm. Second, transfer learning of pre-trained convolutional neural networks (CNNs) is performed for the Finger Vein biometric, employing two approaches. The three selected CNNs are AlexNet, VGG16 and VGG19. In Approach 1, the necessary preprocessing of NIR images is performed before feeding the images to CNN training. In Approach 2, image intensity optimization is additionally employed before the preprocessing step to regularize the image intensity. NIRHI outperforms HKPU and UTFVP for both modalities of focus, in a unimodal setup as well as in a multimodal one. The proposed multimodal biometric system demonstrates a better overall identification accuracy of 99.62% in comparison with the 99.51% and 99.50% reported by recent state-of-the-art systems. Full article
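Score-level fuzzification of two matchers can be illustrated with a small Mamdani-style sketch: each matcher score is mapped to low/medium/high fuzzy grades, a few rules fire via fuzzy AND/OR, and a crisp fused score is recovered by weighted-average defuzzification. The membership shapes, rule base, and output centroids here are illustrative assumptions, not the paper's actual fuzzy system.

```python
import numpy as np

def tri(x, lo, mid, hi):
    """Triangular fuzzy membership: 0 outside [lo, hi], 1 at mid."""
    return float(np.clip(min((x - lo) / (mid - lo), (hi - x) / (hi - mid)), 0.0, 1.0))

def fuzzy_fuse(texture_score, vein_score, eps=1e-9):
    """Fuse two matcher scores in [0, 1] with a tiny illustrative rule base."""
    def grades(s):
        return (tri(s, -0.5, 0.0, 0.5),   # low
                tri(s, 0.0, 0.5, 1.0),    # medium
                tri(s, 0.5, 1.0, 1.5))    # high
    lo_t, md_t, hi_t = grades(texture_score)
    lo_v, md_v, hi_v = grades(vein_score)
    # Rule firing strengths (min acts as fuzzy AND, max as fuzzy OR)
    accept_strong = min(hi_t, hi_v)                       # both high
    accept_weak = max(min(hi_t, md_v), min(md_t, hi_v),   # one high, one medium
                      min(md_t, md_v))                    # both medium
    reject = max(lo_t, lo_v)                              # either low
    # Weighted-average defuzzification with assumed output centroids
    num = 0.9 * accept_strong + 0.5 * accept_weak + 0.1 * reject
    den = accept_strong + accept_weak + reject + eps
    return num / den

print(round(fuzzy_fuse(0.95, 0.9), 3))   # two confident matchers fuse high
print(round(fuzzy_fuse(0.1, 0.1), 3))    # two weak matchers fuse low
```

The rejection rule uses `max` over the low grades, so one poor modality pulls the fused score down even when the other is strong, which is the property that lets fusion suppress false accepts.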
(This article belongs to the Section Biosensors)
