Review

Palmprint Recognition: Extensive Exploration of Databases, Methodologies, Comparative Assessment, and Future Directions

by Nadia Amrouni 1, Amir Benzaoui 2,* and Abdelhafid Zeroual 2
1 LIST Laboratory, University of M’Hamed Bougara Boumerdes, Avenue of Independence, Boumerdes 35000, Algeria
2 Electrical Engineering Department, University of Skikda, Skikda 21000, Algeria
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 153; https://doi.org/10.3390/app14010153
Submission received: 1 December 2023 / Revised: 20 December 2023 / Accepted: 22 December 2023 / Published: 23 December 2023

Abstract:
This paper presents a comprehensive survey examining the prevailing feature extraction methodologies employed within biometric palmprint recognition models. It encompasses a critical analysis of extant datasets and a comparative study of algorithmic approaches. Specifically, this review delves into palmprint recognition systems, focusing on different feature extraction methodologies. As the dataset has a profound impact on palmprint recognition performance, our study meticulously describes 20 extensively employed and recognized palmprint datasets. Furthermore, we classify these datasets into two distinct classes: contact-based datasets and contactless-based datasets. Additionally, we propose a novel taxonomy that categorizes palmprint feature extraction approaches into line-based, texture descriptor-based, subspace learning-based, local direction encoding-based, and deep learning-based architecture approaches. Within each class, the most foundational publications are reviewed, highlighting their core contributions, the datasets utilized, the efficiency assessment metrics, and the best outcomes achieved. Finally, open challenges and emerging trends that deserve further attention are elucidated to guide future research.

1. Introduction

Palmprint recognition constitutes a pivotal biometric technology deployed in the identification and verification of individuals, relying on the distinctive patterns inherent in their palmprints. This method, known for its reliability and security, finds extensive applications in diverse fields, including access control, security systems, and forensic investigations [1]. The palmprint recognition methodology originated with a focus on forensic analysis of latent prints, capitalizing on textural features that are more intricate and extensive than those of fingerprints. In forensic applications, high-resolution images exceeding 400 dots per inch (dpi) are utilized to capture detailed structural information. In contrast, civil and commercial systems, such as access control, opt for lower resolutions under 150 dpi to balance utility and practicality. The larger surface area covered by palmprints allows for highly discriminative characterization, even in low-quality images. This trade-off between resolution and capture conditions underscores the differing goals: forensic usage requires definitive one-to-one matching for evidence, whereas access control emphasizes immediate user authentication and system integration. A comprehensive understanding of these contexts and their implications for image quality, feature representation, and matching algorithms is crucial for adapting palmprint recognition to diverse application needs. The process of palmprint recognition entails several fundamental stages: image capture and acquisition, preprocessing for normalization and enhancement, descriptive feature extraction, and finally, pattern matching for classification.
Palmprint image acquisition involves capturing high-quality palmprint images using various devices like cameras, scanners, or smartphones. These images are then subjected to preprocessing techniques, encompassing noise reduction, normalization, and enhancement, to ensure consistent input quality despite limitations in the acquisition hardware and/or the raw input data. Following preprocessing, relevant features, such as minutiae points, ridges, and lines specific to an individual’s palm, are extracted. These features are crucial for accurate identification and are obtained through advanced image processing methods [2].
Palmprint recognition systems possess notable advantages—notably their non-intrusive nature, stability of features over time, and the abundance of unique identifying characteristics on the palm [3]. However, challenges persist, including variations in illumination, pose, and image quality, necessitating meticulous attention for precise and dependable recognition outcomes.
Sustained research efforts, especially within image processing, machine learning, and deep learning, have significantly enhanced palmprint recognition systems. These advancements have solidified the integration of palmprint recognition as an indispensable component within contemporary biometric security applications [4].
Despite promising advancements in palmprint recognition, numerous unresolved issues and open challenges persist in the field. These challenges encompass diverse factors such as changes in pose, occlusion, blurring, image resolution, and the synthesis of palmprints [5,6]. Successfully addressing these challenges requires substantial efforts aimed at enhancing palmprint acquisition, normalization techniques, and recognition algorithms. Such efforts are crucial for deploying palmprint recognition within forensic analysis, surveillance systems, mobile phone security, and various commercial applications.
To confront and deal with these challenges, this paper provides a meticulous analysis that highlights notable advances that have significantly influenced the development of palmprint biometric recognition. This discussion covers historical evolutions up to the present, with focused attention on anticipating future directions in the field.
In this comprehensive review, our main contributions can be succinctly outlined as follows:
- We deliver a timely, thorough, and concise review of the extensive literature on image-based palmprint recognition. This includes an analysis of both contact and contactless palmprint databases, along with the employed evaluation methods. Our evaluation covers 20 databases and more than 60 publications from late 2002 to 2023.
- Our goal is to enlighten emerging scholars by highlighting significant advances in the historical context of the field and directing them to relevant references for in-depth exploration.
- We present a systematic categorization of palmprint feature extraction techniques, including contemporary methods rooted in deep learning. The purpose of developing this taxonomy is to structure the existing literature on palmprint recognition approaches and provide a coherent framework for understanding the diverse methodologies employed in the field.
- We provide an up-to-date and comprehensive survey of both contact and contactless databases utilized in the realm of palmprint recognition. Our methodology involves organizing these databases and building a chronological timeline, showing the evolution of these datasets over time in terms of the number of individuals represented and the number of samples per individual.
- We scrutinize the existing deep learning-based methodologies, highlighting their exceptional performance on intricate, unregulated, and extensive datasets. Consequently, our examination offers researchers a comprehensive understanding of deep learning-based techniques, which have significantly transformed palmprint recognition paradigms since around 2015.
This paper’s subsequent sections are outlined as follows: Section 2 delves into the anatomical structure of the palm of the hand. Then, Section 3 focuses on the palmprint as a distinctive biometric modality. In Section 4, a comprehensive overview of the general framework for palmprint recognition systems is presented. The challenges inherent in the implementation and the employment of palmprint recognition systems are discussed in Section 5. Section 6 introduces palmprint recognition databases and the proposed taxonomy. Section 7 provides a meticulous classification of palmprint feature extraction methods, detailing the pivotal contributions that have significantly advanced this field. Finally, Section 8 serves as the conclusion, summarizing key findings and addressing future research directions.

2. What Is a Palmprint?

The palmprint refers to the unique pattern of ridges and valleys found on the inner surface of the hand, excluding the wrist and fingers. Palmprints, akin to fingerprints, are a biometric characteristic unique to each individual. A seminal study by Shu and Zhang in 1998 [7] explored the viability of palmprints as a means of personal identification, establishing them as a form of physical biometrics. Their findings highlighted distinctive features in palmprints, including major lines (life, heart, and head lines), wrinkles, minutiae, and delta points. Each palmprint is unique, and the surface of the palm provides a larger information space than a fingerprint, thus containing a greater amount of identifying information.
In general, the attributes of the palmprint manifest on multiple levels, each discernible in various types of palmprint images. Typically, these characteristics are visible across different image resolutions. Lower resolutions, around 100 pixels per inch (ppi) [8,9], exhibit a pronounced texture in which dark lines are of particular significance and visibility. Notably, the three widest and longest of these lines are termed the major lines, namely the heart, head, and life lines, and the remaining lines are referred to as wrinkles [10], as illustrated in Figure 1. Therefore, in the case of low-resolution images, the predominant features are the major lines, wrinkles, and texture. Nevertheless, the ridges of the palmprint remain imperceptible in images of low resolution. In contrast, they become visible in high-resolution images of approximately 500 ppi, which unveil local texture intricacies, including minute creases, ridges, valleys, and minutiae points [10]. Furthermore, images with very high resolution reveal an abundance of very fine local features, including the pores, which can be seen at resolutions exceeding 500 ppi or even reaching 1000 ppi.

3. Why Palmprint Recognition?

In the field of biometric recognition, facial identification still has limitations due to persistent challenges, such as pose, lighting, and orientation variations [11]. Conversely, fingerprints have been widely adopted due to their efficiency, although certain populations, such as manual workers and the elderly, may have difficulty with capturing fingerprints. In a networked society, reliable personal authentication remains critical for security [12]. Compared to other biometric modalities, palmprints have proven to be more effective and acceptable. The palmprint biometric system offers higher accuracy than fingerprints and higher acceptance than facial recognition. With characteristics such as uniqueness, reliability, and security, palmprints have been widely adopted by security agencies, providing a cost-effective and non-intrusive option for developing accurate and efficient biometric systems.
Advanced research in palmprint feature extraction [13,14] has been conducted for contactless systems. Contactless palmprint recognition aims to improve usability and privacy. However, the lack of a knuckle guide can lead to variations in palmprint images due to hand movements. Various methods, such as the utilization of texture operators like local binary pattern (LBP) [15] and Gabor filters [16], were proposed to overcome these challenges.
Palmprint has advantages over other biometric methods, including iris and fingerprint, in terms of identity matching. Palmprints offer the advantage of easy capture with low-resolution devices, mitigating the high costs associated with other modalities. Moreover, law enforcement agencies have extensively employed palmprints for criminal identification, leveraging their unique and stable characteristics [17,18]. These prints encapsulate diverse features like primary lines, minutiae points, ridges, and overall texture. Each feature class contributes significantly to the individuality and discriminative power of a palmprint. This flexibility permits adaptation to the specific security requirements of individuals and organizations.

4. Structure of a Palmprint Recognition System

As delineated in Figure 2, the palmprint recognition framework encompasses four key stages similar to broader biometric architectures: (i) image acquisition, followed by (ii) preprocessing and (iii) feature extraction, and culminating with (iv) classification [19]. The preprocessing stage aims to enhance image quality and remove extraneous elements. The feature extraction stage then elicits descriptive features from the palm image through advanced image analysis. Finally, the classification stage matches the extracted features against enrolled samples to identify the closest match in the database to the test palmprint.
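To make this four-stage flow concrete, the following minimal Python sketch wires the stages together end to end. The stage implementations here (intensity normalization, a crude row/column-mean descriptor, and nearest-neighbor matching) are deliberately naive placeholders of our own, not techniques from the surveyed literature; Section 7 reviews the feature extractors actually used in practice.

```python
import numpy as np

def recognize_palmprint(image, gallery):
    """Skeleton of the four-stage pipeline in Figure 2; `image` is a 2D
    gray-scale array and `gallery` maps subject IDs to enrolled feature
    vectors. Each stage is a deliberately naive stand-in."""
    # (ii) Preprocessing: normalize intensities to [0, 1].
    roi = (image - image.min()) / max(image.max() - image.min(), 1e-9)
    # (iii) Feature extraction: a crude global descriptor (row/column means).
    features = np.concatenate([roi.mean(axis=0), roi.mean(axis=1)])
    # (iv) Classification: nearest enrolled template by Euclidean distance.
    distances = {sid: np.linalg.norm(features - t) for sid, t in gallery.items()}
    return min(distances, key=distances.get)
```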

4.1. Image Acquisition

Contingent on the imaging apparatus, palmprint acquisition methodologies bifurcate into contact-based and contactless approaches [14]. In the first case, images are acquired with the palm placed on the device and the hand guided by positioning markers; in the second, images are captured without any physical contact with the device. Figure 3 illustrates both modes of palmprint image acquisition, with and without contact [20,21].

4.2. Preprocessing

Image preprocessing involves denoising and smoothing the region of interest (ROI) in the input data before deriving significant features from the palmprint images. ROI extraction in palmprint recognition refers to the process of identifying and isolating the specific area of a palmprint image that contains the most relevant and distinctive features for recognition purposes [22]. This extraction is critical for accurate feature analysis and comparison within a recognition system. Various techniques are used to extract the ROI, which typically involves locating the central area of the palmprint image where key features like lines, ridges, and minutiae points are concentrated.
The ROI extraction process aims to enhance the accuracy and precision of palmprint recognition systems by focusing computational efforts on the most informative part of the palmprint image. This targeted approach ensures that only the relevant features are considered during feature extraction and comparison, resulting in more reliable and accurate recognition results. Proper ROI extraction methods are essential for achieving optimal performance in palmprint recognition systems, making this a fundamental step in the overall recognition process.
Figure 4 highlights an example of the preprocessing module of the palmprint identification system, comprising five essential stages [23].
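For illustration, the sketch below implements a heavily simplified ROI extraction in Python with OpenCV: the hand is segmented by Otsu thresholding, and a square region is cropped around the palm centroid. This is an assumption-laden stand-in for the key-point-based procedures used in practice (such as the five-stage module of Figure 4), which align the crop using the valleys between the fingers.

```python
import cv2

def extract_roi(gray, roi_size=128):
    """Simplified palmprint ROI extraction: Otsu segmentation of the hand,
    then a square crop around the silhouette centroid. Real systems locate
    finger-valley key points to rotate and align the crop first."""
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)   # assume largest blob is the hand
    m = cv2.moments(hand)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    half = roi_size // 2                        # no image-border handling here
    roi = gray[cy - half:cy + half, cx - half:cx + half]
    return cv2.equalizeHist(roi)                # simple contrast enhancement
```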

4.3. Feature Extraction

Feature extraction captures distinctive features from biometric data to create a unique digital representation of the palmprint. Algorithms transform raw data into discriminative features used for identification or verification. These features must be invariant to irrelevant variation and highlight fundamental characteristics. The following methods are most commonly used in palmprint feature extraction: line-based, subspace learning-based, local direction encoding-based, texture descriptor-based, and deep learning-based methods. This phase forms the core of this article and is presented in more detail in Section 7.

4.4. Classification

During recognition, the features derived from the input palmprint are compared with the features stored in a database. Various matching algorithms, such as the Euclidean distance or neural networks, are used to determine the similarity between the input features and the stored templates.
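As a simple illustration, a distance-threshold verifier can be sketched as follows; the threshold is an application-specific assumption that trades the false acceptance rate against the false rejection rate (see Section 4.5).

```python
import numpy as np

def verify(query_features, enrolled_template, threshold):
    """Verification sketch: accept the claimed identity if the Euclidean
    distance between the query features and the enrolled template falls
    below a chosen threshold."""
    distance = np.linalg.norm(np.asarray(query_features) - np.asarray(enrolled_template))
    return distance <= threshold, distance
```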

4.5. Evaluation Performances

The valid accuracy and the equal error rate (EER) serve as widely accepted metrics for evaluating the performance of biometric systems. These metrics are fundamental in judging the effectiveness of such systems and are commonly utilized in the field. Valid accuracy assesses the overall correctness of the system in authenticating users, indicating its capability to accurately identify legitimate users. The EER, in turn, stands as a pivotal metric in evaluating biometric system performance. It pinpoints the precise operating condition where the false acceptance rate (FAR) and false rejection rate (FRR) converge, signifying an equilibrium assessment of the system’s performance.
The valid accuracy is the percentage of correctly accepted genuine instances or positive matches in the biometric system, which can be obtained through:
$$\text{Valid accuracy} = \frac{\text{Number of true acceptances}}{\text{Total number of genuine instances}} \times 100$$
where:
True acceptances: the number of instances where the biometric system correctly accepts a genuine user.
Total number of genuine instances: the total number of instances where a genuine user attempts authentication.
The EER is the operating point at which the FAR and FRR intersect; when the two rates do not coincide exactly at any sampled threshold, the EER is commonly approximated as their mean at the closest operating point:
$$EER = \frac{FAR + FRR}{2}$$
where:
$$FAR = \frac{\text{Number of false acceptances}}{\text{Total number of impostor attempts}} \times 100$$
$$FRR = \frac{\text{Number of false rejections}}{\text{Total number of genuine attempts}} \times 100$$
It is important to note that lower EER values indicate better performance in terms of balancing the false acceptance rate (FAR) and false rejection rate (FRR).
In an ideal system, the recognition rate would be 100% and the EER would be 0%. However, in practice, there is often a trade-off between these two metrics, and system designers aim to find a balance that meets the requirements of the specific application.
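The following Python sketch computes the FAR, FRR, and approximate EER from sets of genuine and impostor match scores, under the assumption that scores are distance-like (lower means a better match); it mirrors the formulas above.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR and FRR (in %) at a given threshold, assuming distance-like scores."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor <= threshold) * 100   # impostors wrongly accepted
    frr = np.mean(genuine > threshold) * 100     # genuine users wrongly rejected
    return far, frr

def equal_error_rate(genuine, impostor):
    """Approximate EER: the mean of FAR and FRR at the candidate threshold
    where the two rates come closest to intersecting."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_t = min(thresholds,
                 key=lambda t: abs(np.subtract(*far_frr(genuine, impostor, t))))
    far, frr = far_frr(genuine, impostor, best_t)
    return (far + frr) / 2, best_t
```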

5. Palmprint Biometric Recognition Challenges

Palmprint recognition presents a number of complex challenges, primarily due to reduced pattern quality, variations in focal length, nonlinear deformation caused by contactless image capture systems, and computational complexity caused by the large size of typical palmprint images. In addition, contactless palmprint research faces specific problems [24]. First, the accuracy of contactless palmprint matching tends to decrease compared to contact images due to more pronounced image variations. This requires the development of advanced matching techniques to improve accuracy. Second, automated recognition of contactless palmprints from images of presented hands is complex due to dynamic or unstable backgrounds. Existing research addresses this problem by using fixed backgrounds for image acquisition and pixel-wise operators for key point detection.
Palmprint recognition shares several common problems with traditional fingerprint recognition, including the detection of ridges, valleys, and minutiae points [25]. However, palmprints are larger and more complex, which slows down their recognition at high resolutions. The deformation of fingerprints, especially due to joint variations, is a critical and more complex issue compared to the deformation of palmprints.
Different regions of palmprints exhibit varying qualities and levels of uniqueness. Computational challenges arise from the fact that databases are not always maintained in the same coordinate system during palmprint operations [26]. This affects minutiae-matching algorithms, which become less effective for palmprints due to their higher density.
All biometric systems face challenges in accuracy, scalability, and usability, and improving accuracy relies on strategies such as the use of multimodal biometric systems [27]. In the realm of contactless palmprint recognition, challenges include the degradation of matching accuracy and automatic image detection. Advanced approaches and fixed backgrounds are needed to address these issues.

6. Databases

Palmprint images can be captured using both contact and non-contact (contactless) methods. Contact-based palmprint capture requires subjects to place their hands in direct contact with a sensor, with guiding pegs ensuring proper positioning for image capture, as shown in Figure 5a. Conversely, contactless capture can be achieved using readily available commercial cameras and under unrestricted conditions, as shown in Figure 5b. The latter approach offers significant advantages over contact-based methods, including enhanced user convenience, increased confidentiality, and reduced hygiene risks [10]. Depending on the method used to capture the palm images, we classified the available databases into contact and non-contact databases, as shown in Figure 6.

6.1. Contact-Based Palmprint Databases

6.1.1. PolyU (2003)

The Hong Kong Polytechnic University (PolyU) database [9] encompasses 7752 gray-scale palmprint images from 386 palms, representing 193 individuals. Each participant contributed a minimum of 20 samples, covering both the left and the right hand. The collection process was performed in two separate sessions, with approximately 10 samples collected during the initial session and the remaining 10 samples gathered in the subsequent session.

6.1.2. PolyU-MS (2009)

The PolyU-MS (Multi-Spectral) dataset [28] was created by collecting palmprint images from 250 subjects spanning 20–60 years of age and encompassing 55 females and 195 males. Data gathering employed a customized multi-illumination apparatus to acquire six samples per palm per session over two sessions under four spectral exposures (red, green, blue, and near-infrared wavelengths). This dual-session collection strategy, combined with multi-spectral imaging, resulted in a total of 24,000 images distributed across 6000 prints per band, with each contributing hand providing 12 palmprint images.

6.1.3. IIITDMJ (2015)

The palmprint images within the Indian Institute of Information Technology, Design and Manufacturing, Jabalpur (IIITDMJ) [29] database were notably affected by palm movement and distortion. This database comprises 900 gray-scale palmprint images, sourced from 75 IIITDM Jabalpur students, with six images captured per palm during a single session. The acquisition device was deliberately kept unconstrained in terms of rotation and translation to ensure natural appearances during the capture process.

6.1.4. BJTU_PalmV1 (2019)

The BJTU_PalmV1 dataset [30] is a compilation of contact-based palmprints, featuring 2431 hand images from 174 participants. This dataset encompasses a diverse group, with 77 males and 77 females all falling within the age range of 19 to 40 years. In this dataset, 98 subjects provided 10 images of their right hands, while 4 participants provided 9–10 images of their left hands and the remaining 72 individuals contributed 8–10 images of their left hands and 4–10 images of their right hands. The data collection occurred over two sessions, with two to five images captured for each hand during each session. The images were taken using two different CCD cameras in indoor settings, with the hand positioned 0.35 m away from the camera. Unintentional environmental changes occurred due to the variable number of images taken during each session. The dataset includes individuals primarily from the Institute of Information Science at Beijing Jiao-Tong University. All images were normalized to a size of 1792 × 1200 pixels.

6.1.5. PV_790 (2020)

The PV_790 dataset [31] was captured utilizing a near-infrared (NIR) imaging device operating within the 790 nm wavelength band. This dataset was meticulously compiled with the cooperation of 259 volunteers. In two separate sessions held a month apart, both the left and right hands of each participant were captured ten times, with five images obtained during each session. Consequently, the dataset comprises a total of 518 unique classes and contains 5180 images (259 subjects × 2 hands × 10 samples).

6.1.6. COEP

The palmprint database curated by the College of Engineering, Pune [32], denoted as the COEP palmprint database, contains eight distinct images per palm. This repository aggregates a total of 1344 palmprint images, originating from 168 distinct individuals. The dimensions of these palmprint images are 1600 × 1200 pixels, with the ROI size varying from 290 × 290 to 330 × 330.
Table 1 provides a statistical summary of the contact datasets discussed in this subsection, while Figure 7 illustrates examples from each dataset discussed.

6.2. Contactless-Based Palmprint Databases

6.2.1. CASIA (2005)

The CASIA (Chinese Academy of Sciences, Institute of Automation) palmprint database [33] comprises a dataset of 5502 images obtained from 312 persons. For each subject, both the left and right palmprints were systematically collected, providing a comprehensive dataset for palmprint recognition research. Notably, there were no specific instructions given regarding the posture or hand placement during data capture, leading to considerable variations in palmprint postures within the CASIA database.

6.2.2. CASIA-MS (2007)

The CASIA Multi-Spectral (CASIA-MS) corpus [34] constitutes a substantial contactless palmprint compilation encompassing 7200 images gleaned from 100 distinct subjects. Data acquisition utilized a customized multi-illumination imaging apparatus to capture six distinct samples from each palm across two independent sessions. This process was conducted under varying spectral exposures, including 460 nm, 630 nm, 700 nm, 850 nm, 940 nm, and white light channels. Notably, these palmprint acquisitions occurred without any peg limitation, and deliberate measures were taken to introduce diverse variations intentionally. This purposeful introduction of variability amplifies the diversity of samples within a class and emulates practical usage scenarios, thereby rendering the CASIA-MS database highly valuable for advanced research in palmprint recognition.

6.2.3. IITD (2008)

The Indian Institute of Technology Delhi (IITD) palmprint database [35] offers a rich resource for research and development in palmprint recognition, featuring a diverse collection of 2601 images captured from 460 unique palms. This dataset comprises contributions from 230 subjects ranging in age from 14 to 15 years. The acquisition methodology stipulated the contribution of five to six samples from both the left and right palms of each individual. The ROI images supplied in the dataset are standardized to a size of 150 × 150 pixels.

6.2.4. GPDS (2011)

The GPDS palmprint database [36], provided by the Digital Signal Processing Group at the University of Las Palmas de Gran Canaria, contains 1000 images of the right hands of 100 individuals. During the data collection process, ten photographs of each participant’s palm were taken without adhering to a specific hand position. All palmprint images were acquired in a single session to maintain an uncontrolled environment with varying backgrounds and lighting conditions. Additionally, ROI information is provided for each palmprint in the database.

6.2.5. Cross-Sensor (2012)

The Cross-Sensor touchless palmprint database [37], also known as the Chinese Academy of Science—HeFei Institutes of Physical Science (CASHF), is a dataset containing 12,000 images captured by three distinct devices: one digital camera and two mobile phones. Each device contributed a total of 4000 palmprint images sourced from 200 palms belonging to 100 persons. The data collection process involved capturing 20 samples for each palm across two sessions, with 10 samples acquired during each session. This dataset is designed to facilitate research and development in touchless palmprint recognition.

6.2.6. REST (2016)

The hand database known as REgim Sfax Tunisia (REST) [38], curated by the Research Groups in Intelligent Machines at the University of Sfax, comprises 1945 samples procured from 358 persons aged 6–70 years. Here, data acquisition relied on an affordable 24-bit color 2048 × 1536 CMOS camera under ambient indoor illumination without positioning constraints. Unlike CASIA and IITD, REST exhibits less-constrained hand positions, with no specific housing provided for users’ hands and reliance solely on indoor lighting. Consequently, sample variations are evident in terms of rotation, scale, illumination, and translation across samples. Nevertheless, users were required to maintain dorsal hand contact with a table during the imaging process.

6.2.7. TJI (2017)

The Tongji University (TJI) contactless palmprint assemblage [14] documents samples from 300 university-affiliated volunteers, in which 192 males and 108 females contributed 10 impressions of each palm in each of 2 distinct sessions. The individuals who participated in this operation were all affiliated with Tongji University as employees or students. A 61-day average inter-session interval with a 21–106-day span enabled longitudinal variability assessment. The cohort encompassed 235 subjects aged 20–30 years, and 65 subjects fell into the 30–50-year-old range. Hence, the corpus furnishes 12,000 images of 600 × 800 pixels across 600 distinct palms, facilitating age-related recognition research. This dataset offers a diverse collection of palm images suitable for research and development in palmprint recognition.

6.2.8. NTU-PI-v1 (2019)

To address palmprint recognition challenges in unregulated environments devoid of user cooperation, the Nanyang Technological University Palmprints from the Internet (NTU-PI-v1) dataset [18] was assembled. This dataset comprises 7781 images from 2035 unique palms pertaining to 1093 individuals of heterogeneous age, gender, and ethnicity. Manual cropping from online galleries using bounding boxes furnished samples with uncontrolled perspectives, gestures, appearances, occlusions, and backgrounds, emulating forensic use cases. Notably, publicly available palmprint databases specifically tailored to forensic applications are lacking; hence, Internet sourcing was necessary to approximate operational conditions involving unsupervised capture without biometric intent. Specimen sizes spanned from 30 × 30 to 1415 × 1415 pixels, with a median resolution of 115 × 115 pixels. Additionally, manually annotated landmarks and segmentations are supplied throughout the corpus.

6.2.9. NTU-CP-v1 (2019)

The NTU-CP-v1 database [18] offers a diverse collection of 2478 contactless palm images captured from 655 distinct palms. Participants in this dataset represent a specific demographic, primarily consisting of individuals of Asian descent (Chinese, Indian, Malay), with a smaller inclusion of Caucasian and Eurasian subjects. Collection took place during two sessions in Singapore and was characterized by a non-contact setting and without strict posing criteria. Photographs were captured using cameras such as a Canon EOS 500D or a NIKON D70s, and subsequent processing involved cropping the images to focus on the hand regions. The dimensions of the hand images vary, ranging from 420 × 420 pixels to 1977 × 1977 pixels, with the most common size being 1373 × 1373 pixels. This dataset offers a diverse representation of palm images, providing valuable resources for research in palmprint recognition.

6.2.10. BJTU_PalmV2 (2019)

The BJTU_PalmV2 dataset [30] is a contactless palmprint collection featuring 2663 hand images from 148 volunteers, including 91 males and 57 females, spanning ages 8 to 73. Data were collected over two sessions from 2015 to 2017. Each participant contributed 6–10 images of both the left and right hands, with 3–5 images captured per hand in each session. The dataset was acquired indoors and outdoors using smartphones like the iPhone 6, Nexus 6p, Huawei Mate8, Nubia Z9, and Xiaomi Redmi 1S. Although there were no strict limitations, volunteers were instructed to naturally spread their fingers and maintain a distance of 15–25 cm from the mobile camera. Participants represented diverse occupations from China, India, Sri Lanka, and Singapore. All images were normalized to a size of 3264 × 2448 pixels.

6.2.11. MPD (2020)

The MPD dataset [39] comprises palm images captured under diverse backgrounds and various levels of lighting conditions. To mitigate the effects of different camera parameters between different brands of mobile devices, palm images were collected using two specific smartphone brands: Huawei and Xiaomi. To eliminate seasonal or temporal effects on the photos, a second round of collection was conducted using the same standard with the same set of phones six months later. The MPD includes 16,000 palmprint images from 200 volunteers from Tongji University, encompassing a balanced mix of both academic staff and student demographics. The age range spans from 20 to 50 years, with 195 subjects in the 20–30 age range and the remaining participants between 30 and 50 years old.

6.2.12. XJTU-UP (2021)

The Xi’an Jiaotong University Unconstrained Palmprint (XJTU-UP) database [40] contains more than 20,000 palmprint images collected from 100 persons. It was collected in an unconstrained environment, which significantly reduced the collection constraints compared to other databases, thereby increasing the convenience of the recognition system. The data were collected using five popular smartphones: iPhone 6S, HUAWEI Mate8, LG G4, Samsung Galaxy Note5, and MI8. Each device captured images under two different lighting conditions: natural room lighting and flash from the phone. The entire database is divided into ten sub-datasets named HN (HUAWEI Mate8 under natural lighting), IN (iPhone 6S under natural lighting), LN (LG G4 under natural lighting), MN (MI8 under natural lighting), SN (Samsung Galaxy Note5 under natural lighting), HF (HUAWEI Mate8 under flash light), IF (iPhone 6S under flash light), LF (LG G4 under flash light), MF (MI8 under flash light), and SF (Samsung Galaxy Note5 under flash light).

6.2.13. CrossDevice-A (2022)

Images comprising CrossDevice-A [41] were sourced from the MPD and TJI datasets, captured by mobile and Internet of Things (IoT) devices, respectively. The MPD dataset comprises 400 identities and 16,000 images, whereas the TJI dataset includes 600 identities and 12,000 images. CrossDevice-A was created by selecting the intersecting identities from the two datasets.

6.2.14. CrossDevice-B (2022)

CrossDevice-B is a heterogeneous contactless palmprint corpus [41] constituting imagery derived from the MOHI [42] and WEHI [42] hand shape datasets. Uniquely, these source datasets focused exclusively on structural hand characterization rather than friction ridge encoding, thereby presenting imagery with diminished palmprint clarity. Consequently, consolidating the distinct datasets poses significant subject-matching challenges, making CrossDevice-B a more realistic and arduous test than CrossDevice-A for assessing cross-device deployability.
Table 2 provides a statistical summary of the contactless datasets discussed in this subsection, while Figure 8 illustrates examples from each dataset discussed.

7. Feature Extraction Approaches

The majority of biometric recognition systems using palmprint images extract distinctive features and then compare these features to enrolled models archived in a database. We propose a categorization of palmprint recognition approaches based on both the type of data employed and the specific strategy utilized for extracting pertinent features. This categorization divides palmprint recognition methods into five overarching classes: line-based, deep learning-based, subspace learning-based, local direction encoding-based, and texture descriptor-based methods (as shown in Figure 9).

7.1. Line-Based Approaches

Line-based methods focus on the identification and extraction of the principal and local lines embedded in a palmprint image. These distinctive lines, which include both the prominent main ridges and the finer local ridges, serve as key features to facilitate accurate and efficient recognition processes. By strategically detecting and analyzing these lines, line-based techniques exploit the inherent uniqueness of the palm ridge pattern. The sophisticated ability of these methods to capture the intricate interplay between the major and minor lines enables the creation of highly informative palmprint templates, providing the basis for reliable and discriminatory recognition systems. This meticulous line-oriented approach adds an extra dimension of precision to palmprint recognition, making line-based methods an invaluable tool in the arsenal of biometric security mechanisms.
Li et al. (2002) [43] proposed a novel approach for palmprint identification by exploiting the power of the Fourier transform to extract and represent spatial frequency features from palmprint images. Prior to feature extraction, the palmprint images are aligned and normalized. Then, the Fourier transform acts as a bridge, seamlessly transforming the palmprint image from the spatial domain, characterized by pixel intensities, into the frequency domain and detecting major lines in the contours. Finally, the retrieved features are used to guide a multi-level search in the database for the best match to the template.
Jia et al. (2008) [44] proposed a multi-feature-based technique for palmprint recognition that combines primary line (PL) and locality preserving projection (LPP) features. The technique involves extracting the main lines from the query image and comparing them with the main lines present in each image within the training set. Next, it creates a smaller training set consisting of the images with the highest similarity scores. Finally, the technique fuses the similarity scores of the main lines and the LPP features at the decision level and recognizes the query image in the smaller training set.
Jia et al. (2013) [45] introduced a novel approach for palmprint identification named histogram of oriented lines (HOL). This technique draws inspiration from the widely used histogram of oriented gradients (HOG) technique. Unlike HOG, which primarily captures edge information, HOL delves deeper, specifically targeting and characterizing the prominent lines that define a palmprint’s unique identity through the use of a series of Gabor filters with varying orientations or the modified finite Radon transform (MFRAT). In the matching phase, the Euclidean distance is used as the similarity measure.
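To give a flavor of such line-orientation histograms, the sketch below computes an HOL-like descriptor in Python with OpenCV: a bank of real Gabor filters at several orientations, a per-pixel dominant-orientation map, and cell-wise histograms of the dominant indices. The filter parameters and cell size are illustrative assumptions, not the configuration used in [45].

```python
import cv2
import numpy as np

def orientation_histogram(roi, n_orient=6, cell=16):
    """HOL-flavored descriptor sketch: dominant line orientation per pixel
    (most negative real-Gabor response, since palm lines are dark), then
    histograms of the orientation indices over non-overlapping cells."""
    resp = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kern = cv2.getGaborKernel((17, 17), 4.0, theta, 10.0, 0.5, 0)
        resp.append(cv2.filter2D(roi.astype(np.float32), -1, kern))
    dominant = np.argmin(np.stack(resp), axis=0)   # H x W orientation map
    h, w = dominant.shape
    hists = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            block = dominant[y:y + cell, x:x + cell]
            hists.append(np.bincount(block.ravel(), minlength=n_orient))
    return np.concatenate(hists).astype(np.float32)
```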
Luo et al. (2016) [46] unveiled a local line directional pattern (LLDP) descriptor based on the line direction space for palmprint identification. To capture the line direction features, they used methods such as the modified finite Radon transform (MFRAT) and the real component of the Gabor filter. For the categorization process, they used Manhattan and Chi-square distances.
Mokni et al. (2017) [47] proposed an intra-modal palmprint recognition method that combines the characteristics of the principal lines and texture features. First, an elastic shape analysis framework was employed to explore the shape characteristics of the principal lines. Next, the texture information was explored using fractal analysis. Then, to improve the system accuracy, the important information from the various collected features of the principal line shape and the texture pattern was merged. Finally, a random forest classifier was used to identify palmprints after combining the shape analysis-based and fractal-based features.
Gumaei et al. (2018) [48] proposed the HOG-SGF technique for palmprint identification, which combines histogram of oriented gradients (HOG) features with a steerable Gaussian filter (SGF). First, all palmprint images are preprocessed to segment only the necessary ROI. Then, the palmprint features are extracted using HOG-SGF. In the next phase, the dimensionality of the palmprint features is reduced using an efficient auto-encoder (AE). Finally, the regularized extreme learning machine (RELM) classifier is used for palm identification.
Zhou et al. (2019) [49] developed a palmprint feature extraction network founded on the double biologically inspired transform (DBIT), aiming to elucidate the mechanisms through which the human visual system perceives palmprints. This network comprises two phases, each applying dual convolutional layers succeeded by sum pooling, rectified linear unit (ReLU) activation, and normalization and combination operations. The first stage activates orientation-selective filters to elicit line and edge responses. The subsequent stage derives rotation-, scale-, and translation-invariant feature maps. Additionally, Pearson correlation and weighted fusion techniques are combined to assess the features’ discriminability and to perform palmprint matching.
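As a loose PyTorch sketch of one such biologically inspired stage (two convolutions, sum pooling, ReLU, and normalization), consider the module below. The layer widths and kernel sizes are our own illustrative assumptions, and the learnable filters stand in for DBIT’s fixed orientation-selective filters.

```python
import torch.nn as nn
import torch.nn.functional as F

class BioInspiredStage(nn.Module):
    """One DBIT-like stage as described above: dual convolutional layers,
    sum pooling (average pooling scaled by window area), ReLU activation,
    and L2 normalization. A hypothetical configuration, not the authors'."""
    def __init__(self, in_ch=1, mid_ch=8, out_ch=16, pool=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, mid_ch, kernel_size=7, padding=3)
        self.conv2 = nn.Conv2d(mid_ch, out_ch, kernel_size=5, padding=2)
        self.pool = pool

    def forward(self, x):
        x = self.conv2(self.conv1(x))
        x = F.avg_pool2d(x, self.pool) * self.pool ** 2   # sum pooling
        x = F.relu(x)
        return F.normalize(x.flatten(1), dim=1)           # L2-normalized features
```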
In order to synthesize the reviewed studies in this subsection, Table 3 offers a summary overview, outlining the various palmprint feature extraction methodologies spanning the utilized techniques, leveraged datasets, implemented experimental protocols, and key findings.

7.2. Subspace Learning-Based Approaches

Subspace learning-based methods work by extracting and assimilating key features from a palmprint image through the acquisition of a latent subspace guided by a variety of constraints. These methods go beyond traditional feature extraction techniques by dynamically learning and encapsulating the most salient aspects of palmprint patterns into a lower dimensional subspace. By incorporating constraints that include structural, statistical, and contextual information, these techniques meticulously fine-tune their subspace representations to capture the subtle intricacies of individual palmprint variations. This process effectively distills the complexity of palmprint data into a more compact and discriminative form, laying the groundwork for increased recognition accuracy. Subspace learning-based methods emerge as powerful tools to decipher the underlying palmprint data structure, allowing for the creation of highly informative templates that are adept at detecting minute differences while also accounting for broader pattern trends.
Wu et al. (2003) [50] proposed the Fisherpalm, a palmprint recognition system founded on Fisher’s linear discriminant (FLD) analysis. Within this approach, every palmprint is treated as a point within a high-dimensional image space. Palmprints are then mapped from this high-dimensional space to a much lower-dimensional feature space. This transformation enhances the system’s ability to effectively discriminate between the palmprints of different individuals.
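A Fisherpalm-style projection can be sketched with scikit-learn as below: palmprint ROIs are flattened into image-space points, compressed with PCA (to keep the scatter matrices non-singular), and then projected onto Fisher discriminant axes. The PCA dimensionality is an assumption; it must not exceed the number of training samples.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisherpalm_fit(images, labels, n_pca=100):
    """Fisherpalm-style pipeline sketch: flatten each ROI into a point in
    high-dimensional image space, reduce with PCA, then learn Fisher
    discriminant axes on the compressed representation."""
    X = np.asarray([im.ravel() for im in images], dtype=np.float64)
    pca = PCA(n_components=n_pca).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return pca, lda

def fisherpalm_project(pca, lda, image):
    """Map a single ROI to the low-dimensional discriminant feature space."""
    return lda.transform(pca.transform(image.ravel()[None, :]))[0]
```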
Connie et al. (2005) [2] introduced an automated system for palmprint recognition utilizing a peg-free scanner. Specifically, to deal with projection issues following the ROI extraction, principal component analysis (PCA), independent component analysis (ICA), and Fisher discriminant analysis (FDA) were investigated for dimensionality reduction. Additionally, the wavelet transform provided complementary multi-resolution texture characterization. The authors concluded that the configuration involving wavelet + FDA, denoted as WFDA, outperforms the other configurations in terms of performance.
Hu et al. (2007) [51] proposed a technique known as 2D locality preserving projection (2DLPP) for palmprint identification. This method focuses on feature extraction and is based on the concept of locality preservation and image matrix projection. To perform the classification, they used a nearest neighbor classifier considering the L2 norm for measuring similarity.
Pan and Ruan (2008) [52] proposed an approach known as improved two-dimensional locality preserving projections (I2DLPP) for palmprint identification. This technique focuses on simplifying computational complexity and reducing feature dimensions by employing two main steps: First, it projects the training space along the row direction using two-dimensional principal component analysis (2DPCA). Thereafter, LPP is conducted along the column direction of the resulting compressed matrices by employing a nearest neighbor graph. Additionally, the authors propose applying I2DLPP to Gabor-filtered images (I2DLPPG) to further enrich the textural characterization. Extensive analysis revealed that the Gabor-filtered input images significantly improved computational efficiency and identification accuracy.
Lu and Tan (2011) [53] proposed an approach called diagonal discriminant locality preserving projections (Dia-DLPP) for the identification of both faces and palmprints. This approach is crafted to capture discriminant information from the data in both the vertical and horizontal directions. Uniquely, diagonalized images were integrated during the training and testing phases to enhance the discriminative capabilities. The authors additionally proposed a weighted discriminative variant (W2D-DLPP) that explicitly assigns greater significance to more identity-discriminative pixel clusters when computing projection vectors. The discriminative scores are incorporated into the traditional 2D-DLPP technique, resulting in the refined method W2D-DLPP. This integration of discriminative pixel weighting significantly improves the identification performance of both 2D-DLPP and Dia-DLPP, leading to improved accuracy in face and palmprint identification tasks.
Rida et al. (2018) [54] introduced a palmprint recognition system that relies on a set of sparse representations (SR). They used two-dimensional principal component analysis (2D-PCA) to build an initial sample dictionary and then used two-dimensional linear discriminant analysis (2D-LDA) to extract discriminative features.
Rida et al. [55] devised an ensemble framework leveraging the random subspace method (RSM) for contactless palmprint recognition classification. Two-dimensional principal component analysis (2DPCA) was applied to obtain multiple dimensional eigenvector random subspaces. Thereafter, two-dimensional linear discriminant analysis (2DLDA) was conducted within each random 2DPCA projection to retrieve the most discriminative feature subsets. In addition, Euclidean distances with nearest neighbor classifiers were subsequently implemented on each subspace. Ultimately, a nonlinear decision function was constructed, comprising individual classifiers that vote by majority.
Wan et al. (2021) [56] introduced a feature extraction technique for palmprint recognition termed sparse fuzzy 2D discriminant local preserving projection (SF2DDLPP) that integrates elasticity into the dimensionality reduction process. Their method first constructs a fuzzy membership matrix using the fuzzy k-nearest neighbors algorithm (FKNN), computed separately for the within-class and between-class weight matrices to encode intra-personal and inter-personal variations. Two theorems are subsequently derived to efficiently obtain the generalized eigenvectors for optimized class separation. Finally, elastic net regularization is utilized to determine the optimal sparse projection matrix.
Zhao et al. (2022) [57] proposed an innovative approach known as double-cohesion learning-based multi-view and discriminant palmprint recognition (DC_MDPR). This approach effectively exploits the multi-view features of palmprint images along with the inherent data structure. To achieve this goal, they introduced a method called double cohesion, which combines inter-view cohesion and intra-class cohesion. This technique aims to enhance the distinctiveness of multiple features, reduce feature dimensions, and enhance the representation of these features within the same subspace.
Wan et al. (2023) [58] proposed an approach known as low-rank two-dimensional local discriminant graph embedding (LR-2DLDGE) for the purpose of feature extraction and dimensionality reduction. Initially, the technique uses a graph embedding (GE) framework to capture and preserve essential discriminative information within local neighborhoods of the data. Next, LR-2DLDGE is designed to ensure that data points within the feature space are maximally independent across different classes, thereby enhancing discriminative capabilities. To bolster the method’s resilience against noise and corruption, the approach incorporates an L1 norm constraint and employs low-rank learning techniques.
Table 4 provides a summary of the studies discussed in this subsection, elucidating the employed feature extraction methods, datasets, experimental protocols, and principal findings.

7.3. Local Direction Encoding-Based Approaches

Methods based on local orientation coding focus on extracting and encoding the prevailing orientation information inherent in each pixel of a palmprint image. These methods differ from conventional techniques by focusing on the underlying directional characteristics of the ridge patterns, capturing the nuanced variations in orientation that contribute to the uniqueness of each palmprint. By extracting dominant directions at a local level, these methods reveal the intricate minutiae of the palm surface, going beyond traditional ridge-based representations. This wealth of directional information is then distilled into compact and informative bitwise codes, creating a powerful and discriminative coding scheme. These techniques effectively balance the precision of directional cues with the efficiency of compact coding, resulting in improved recognition performance. In the landscape of biometric authentication, these methods carve out a distinctive niche, offering a fusion of geometric understanding and efficient data representation that contributes to the development of accurate and reliable identity verification systems.
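A competitive-coding-style sketch of this idea follows: a bank of real Gabor filters probes several orientations, the per-pixel code is the orientation with the strongest (most negative) line response, and two code maps are compared with a wrap-around angular distance. The filter parameters are illustrative assumptions rather than values from the works reviewed below.

```python
import cv2
import numpy as np

def direction_code(roi, n_orient=6):
    """Dominant line direction per pixel: the orientation whose real Gabor
    response is most negative, since palm lines are darker than the skin."""
    resp = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kern = cv2.getGaborKernel((35, 35), 5.0, theta, 12.0, 0.5, 0)
        resp.append(cv2.filter2D(roi.astype(np.float32), -1, kern))
    return np.argmin(np.stack(resp), axis=0).astype(np.uint8)

def angular_distance(code_a, code_b, n_orient=6):
    """Normalized per-pixel angular mismatch between two orientation codes,
    with wrap-around (orientation 0 is adjacent to orientation n_orient-1)."""
    diff = np.abs(code_a.astype(int) - code_b.astype(int))
    diff = np.minimum(diff, n_orient - diff)
    return diff.sum() / (code_a.size * (n_orient // 2))
```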
Kumar and Shen (2004) [59] proposed a palmprint recognition method based on real Gabor function (RGF) filtering. The process starts by normalizing palmprint images. Subsequently, these normalized images are subjected to multi-channel filtering using a set of RGF filters. Distinctive features, referred to as PalmCode, are computed within multiple overlapping concentric bands using each of these filtered images.
Kong et al. (2006) [60] proposed a feature-level coding method for palmprint recognition. First, palmprint image features are extracted utilizing a bank of elliptic Gabor filters. Then, a feature-level fusion technique is offered to create a single feature known as the fusion code. The normalized Hamming distance between two fusion codes is then used to determine their similarity. Finally, a dynamic threshold is applied for final judgments.
Mansoor et al. [61] proposed a multi-scale feature encoding strategy fusing contourlet transforms (CTs) and non-subsampled contourlet transforms (NSCTs) for palmprint recognition. This method aims to jointly capture localized texture details alongside global features within palmprint imagery, representing them as a compact and fixed-length palm code. The iterated directional filter banks are introduced to divide the two-dimensional spectrum into small slices. The feature vector is then formed by computing the block-wise directional energy in the transform domains. Finally, for matching, normalized Euclidean distances between vectorial codes quantify palmprint identity similarity.
Zhang et al. (2012) [62] proposed a novel approach for palmprint identification based on local direction encoding. Specifically, the authors augment native binary orientation co-occurrence vector (BOCV) descriptors with additional “fragile bits” constituting noise-sensitive activations to derive the extended BOCV (E-BOCV). A consolidated similarity metric was obtained by synergizing the fragile pattern distance (FPD) with Hamming distances to capture the mismatches between two code maps.
Zhang and Gu (2013) [63] suggested employing a weighted fusion scheme integrating two-phase test sample sparse representation (TPTSR) with competitive coding methods for palmprint identification. First, a competitive coding algorithm is employed to obtain the directional matching score of two images. Thereafter, TPTSR is computed globally to match the score of the two images. Finally, the two scores are added together to categorize the test sample.
Li et al. (2014) [64] presented directed representations (DRs) for palmprint identification. First, a representation is proposed for an appearance-based technique based on multiple anisotropic filters. Subsequently, the feature extraction and the dimension reduction are guaranteed using the PCA technique. Finally, a compressed sensing classification step is implemented to distinguish between palms of different hands.
Fei et al. (2016) [65] presented a robust palmprint recognition method, the double-orientation code (DOC) approach. This technique offers a reliable way to represent palmprint orientation features, as shown through an investigation of palmprint orientation-based coding theory. They also introduced a novel nonlinear angular matching score metric for efficient similarity assessment between DOC-encoded palmprints, boosting the overall effectiveness of the technique in palmprint identification.
Xu et al. (2016) [66] introduced a palmprint identification technique called discriminative and robust competitive code (DRCC), which emphasizes discriminative and robust techniques based on dominant orientation. Their approach combines dominant orientation and lateral codes to capture important orientation features in palmprints. Strategic weighting during orientation extraction improves accuracy while using the same Gabor filters as the conventional method. This innovation holds promise for accurate and efficient palmprint orientation extraction.
Almaghtuf et al. (2020) [67] proposed a palmprint coding technique known as difference of block means (DBM). To derive the palmprint code, they followed the recommended approach: First, they computed the difference between overlapping block means of identical size within the interest area of the palmprint to extract palm-related information in both the vertical and horizontal directions. Then, vertical and horizontal codes were generated by applying thresholding to the DBM features. Finally, the Hamming distance, which is the average of the vertical and horizontal distances, was used for the matching step.
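Since the DBM procedure is described step by step above, it lends itself to a compact sketch; the block and step sizes below are illustrative assumptions.

```python
import numpy as np

def dbm_code(roi, block=8, step=4):
    """Difference-of-block-means sketch: means of overlapping blocks, signed
    differences between horizontally and vertically adjacent block means,
    thresholded at zero into two binary codes."""
    roi = roi.astype(np.float32)
    h, w = roi.shape
    means = np.array([[roi[y:y + block, x:x + block].mean()
                       for x in range(0, w - block + 1, step)]
                      for y in range(0, h - block + 1, step)])
    h_code = (means[:, 1:] - means[:, :-1]) > 0   # horizontal differences
    v_code = (means[1:, :] - means[:-1, :]) > 0   # vertical differences
    return h_code, v_code

def dbm_distance(code1, code2):
    """Matching score: average of the horizontal and vertical Hamming distances."""
    return (np.mean(code1[0] != code2[0]) + np.mean(code1[1] != code2[1])) / 2
```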
Liang et al. (2020) [68] developed a multi-feature palmprint recognition framework predicated on modeling orientation field patterns termed histograms of line mixed distances (HODlm) alongside histograms of response distances (HODr). Subsequently, the multi-feature two-phase sparse representation (MTPSR) was designed to improve the overall matching cost and to allow the handling of palmprint feature recognition.
Table 5 furnishes an overview of the studies discussed in this subsection, outlining the employed feature extraction methodologies, datasets, experimental protocols, and key findings.

7.4. Texture-Based Approaches

Texture-based methods take advantage of the intricate and diverse local features present in palmprint patterns. By exploiting these rich textural features, these methods aim to achieve higher accuracy and reliability in the identification process. Unlike traditional methods that rely solely on global features, texture-based techniques delve into the fine-grained details of the palm surface, capturing an array of fine structures such as ridges, wrinkles, and pores. This meticulous analysis enables the creation of comprehensive and distinctive palmprint templates that facilitate robust and discriminative recognition. As a result, these methods are a cornerstone of biometric authentication systems, where the intricate texture patterns of an individual’s palm provide a wealth of information for secure and accurate identity verification.
Hammami et al. (2014) [69] employed a technique involving the division of the complete palmprint image into smaller sub-regions. Within each of these sub-regions, they applied the local binary pattern (LBP) operator to capture the texture characteristics. To enhance recognition efficiency and minimize memory consumption, they introduced a selection process, which retained only the most distinctive areas for the identification task. The sequential forward floating selection (SFFS) algorithm was the basic method employed for this purpose.
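A block-wise LBP descriptor of the kind used in [69] can be sketched with scikit-image as follows; the grid size and LBP parameters are illustrative assumptions, and the SFFS region-selection step is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_descriptor(roi, grid=4, points=8, radius=1):
    """Block-wise LBP sketch: split the ROI into a grid of sub-regions,
    compute a uniform-LBP histogram in each, and concatenate them."""
    lbp = local_binary_pattern(roi, P=points, R=radius, method="uniform")
    n_bins = points + 2                  # uniform patterns + one catch-all bin
    h, w = lbp.shape
    bh, bw = h // grid, w // grid
    hists = []
    for gy in range(grid):
        for gx in range(grid):
            block = lbp[gy * bh:(gy + 1) * bh, gx * bw:(gx + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists).astype(np.float32)
```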
Raghavendra and Busch (2015) [70] introduced an innovative and straightforward method for palmprint recognition, exploiting the distributed feature representation extracted from a bank of binarized statistical image features (B-BSIF). BSIF functions as a texture descriptor akin to the LBP, but its distinctiveness lies in how the filters are obtained: unlike the LBP, whose filters are defined manually, BSIF filters are learned from natural images.
Tamrakar and Khanna (2016) [71] introduced a palmprint recognition approach. Initially, the region of interest (ROI) is obtained from palmprint images. Subsequently, to mitigate computational costs and noise, a first-level decomposition is performed on the ROI using the Haar wavelet. Gaussian derivative phase patterns are then computed, and their block-wise histograms, which statistically summarize local variations, are concatenated into a comprehensive descriptor called the block-wise Gaussian derivative phase pattern histogram (BGDPPH). Having extracted robust features, kernel discriminant analysis (KDA) is applied to refine their discriminative power. Finally, the Euclidean distance is used for classification.
Doghmane et al. (2018) [72] proposed a local palmprint feature descriptor combining the Gabor wavelet, a local phase quantization (LPQ) descriptor, and a spatial pyramid histogram (SPH) descriptor for palmprint image extraction. First, the Gabor wavelet and the LPQ are used to extract blur-invariant, multiscale, and multi-orientation features. The SPH is then employed for vertical decomposition to concatenate a group of local features into a large histogram, called the Gabor LPQ spatial pyramid histogram (GLSPH) feature, for each image. The GLSPH features are then projected into a whitened linear discriminant analysis (WLDA) subspace to reduce their dimensionality and make them more discriminative, resulting in the DGLSPH feature. Finally, the K-nearest neighbor (K-NN) classifier is employed for classification.
Zhang et al. (2018) [73] introduced an innovative approach to palmprint recognition involving a two-stage process that combines a weighted adaptive center symmetric local binary pattern (WACS-LBP) with weighted sparse representation-based classification (WSRC). Their methodology first applies WACS-LBP in an initial labeling stage to assign the test sample a limited set of feasible class labels. WSRC is then employed in the ensuing identification phase to determine the final class membership from this reduced label set. Core to their approach is the conversion of the intrinsically complex full classification problem into a more tractable task by substantially reducing the number of candidate classes considered at each phase.
El-Tarhouni et al. (2019) [74] proposed a multispectral palmprint identification method that incorporates Pascal coefficient multispectral local binary pattern (PCMLBP) and pyramid histogram of orientation gradients (PHOG) descriptors. They performed two experimental procedures: in the first, only the PCMLBP descriptor was used for feature extraction; in the second, PCMLBP was combined with PHOG, resulting in an improved recognition rate. PCA was then used to reduce the dimensionality of the feature vectors. Finally, random sampling linear discriminant analysis (LDA) was utilized for classification.
Attallah et al. (2019) [75] introduced a palmprint identification approach that merges spiral features with LBP filters and selects the optimal features using minimum redundancy maximum relevance (mRMR). The process starts by partitioning the palmprint image into smaller blocks, akin to examining individual pieces of a mosaic. Within each block, two crucial statistical descriptors are analyzed: skewness and kurtosis. The Hamming distance is then applied to compute both intra-similarities between blocks within the same palmprint and inter-similarities between corresponding blocks across different palmprints.
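The per-block moment computation can be sketched as follows; the block size is assumed, and the spiral scan, LBP fusion, and mRMR selection stages are left out:

```python
# Minimal sketch of per-block skewness/kurtosis descriptors in the spirit of [75].
import numpy as np
from scipy.stats import skew, kurtosis

def block_moments(roi, block=16):
    """Return an (n_blocks, 2) array of [skewness, kurtosis], one row per block."""
    feats = []
    for i in range(0, roi.shape[0] - block + 1, block):
        for j in range(0, roi.shape[1] - block + 1, block):
            patch = roi[i:i + block, j:j + block].ravel()
            feats.append([skew(patch), kurtosis(patch)])
    return np.asarray(feats)
```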
Chaudhary and Srivastava (2020) [76] proposed an approach for feature extraction in palmprint identification known as two-dimensional cochlear transform (2DCT). This method was designed to efficiently capture distinctive palmprint features. To validate the efficacy of the method, the authors performed comprehensive analyses, including both theoretical and empirical assessments. The theoretical evaluation involved demonstrating the orthogonality properties of the transform, whereas the empirical evaluation considered its performance under various challenging conditions. For the classification step, they adopted the k-nearest neighbors (KNN) algorithm, using the Euclidean distance metric for similarity assessment.
Zhang et al. (2020) [77] proposed a contactless palmprint identification and recognition technique integrating hierarchical multi-scale complete local binary patterns (HMS-CLBPs), for scale-invariant texture encoding, with weighted sparse representation-based classification (WSRC) for pattern matching. Local texture descriptors are first extracted over multi-scale domains to capture both fine and coarse palmprint texture. The descriptors are weighted, vectorized, and augmented to construct a highly overcomplete dictionary from the training samples. This dictionary is then used to find a sparse representation of the test sample, yielding the sparse coefficients corresponding to each dictionary atom. Palmprint image recognition is finally performed by computing the reconstruction residuals between the test sample and its synthesized approximation under each class-specific sub-dictionary.
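The residual-based decision rule at the heart of (W)SRC can be sketched as below; the descriptor weighting and hierarchical multi-scale encoding of [77] are omitted, and scikit-learn's Lasso stands in for whichever sparse solver the authors used:

```python
# Sparse-representation-based classification (SRC) residual sketch.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(dictionary, labels, test_vec, alpha=0.01):
    """dictionary: (d, n) column-stacked training features; labels: (n,) class ids."""
    solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    solver.fit(dictionary, test_vec)               # sparse coefficients over all atoms
    coefs = solver.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        recon = dictionary[:, mask] @ coefs[mask]  # class-specific reconstruction
        residuals[c] = np.linalg.norm(test_vec - recon)
    return min(residuals, key=residuals.get)       # class with the smallest residual
```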
Amrouni et al. (2022) [78] introduced a feature extraction approach called multiresolution analysis. First, they applied the discrete wavelet transform (DWT) to an original palmprint image. This application facilitated the creation of multiple image representations, known as sub-bands, each with different resolutions. In addition, they utilized the local texture descriptor known as binarized statistical image features (BSIF) and applied it not only to the original image but also to the sub-bands produced by the DWT at lower resolutions. The resulting histograms from each of these levels were then merged to form a final feature vector.
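A minimal sketch of this multiresolution idea is shown below. Real BSIF filters are learned by independent component analysis from natural images, so the random filters here are only stand-ins, and a single decomposition level is shown:

```python
# Multiresolution sketch in the spirit of [78]: DWT sub-bands encoded with a
# BSIF-like binary filter descriptor, histograms concatenated into one vector.
import numpy as np
import pywt
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
FILTERS = rng.standard_normal((8, 7, 7))      # stand-in for 8 learned 7x7 BSIF filters

def bsif_like_histogram(img):
    """Threshold each filter response at zero and histogram the resulting 8-bit codes."""
    bits = [(convolve(img, f) > 0).astype(np.uint32) << k
            for k, f in enumerate(FILTERS)]
    codes = np.sum(bits, axis=0)
    h, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return h

def multiresolution_feature(roi):
    """Concatenate histograms from the original ROI and its DWT approximation band."""
    cA, (cH, cV, cD) = pywt.dwt2(roi, "haar")  # first-level Haar decomposition
    return np.concatenate([bsif_like_histogram(roi), bsif_like_histogram(cA)])
```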
Table 6 provides a summary of the studies discussed in this subsection, presenting the feature extraction methods used, the datasets employed, the experimental protocols, and the key findings.

7.5. Deep Learning-Based Approaches

In this category, the methods frequently employed convolutional neural networks (CNNs). These networks comprise convolutional layers, pooling layers, and fully connected layers, allowing for simultaneous feature extraction and classification [79,80].
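As a concrete illustration of this pipeline, the following minimal PyTorch sketch stacks convolutional, pooling, and fully connected layers for palmprint ROI classification; the layer sizes are arbitrary and do not correspond to any specific architecture surveyed below:

```python
# Minimal CNN for palmprint ROI classification (illustrative only).
import torch
import torch.nn as nn

class PalmCNN(nn.Module):
    def __init__(self, n_classes, roi_size=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 128 * (roi_size // 8) ** 2      # spatial size halved three times
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, x):                      # x: (batch, 1, roi_size, roi_size)
        return self.classifier(self.features(x))

# Example: logits = PalmCNN(n_classes=386)(torch.randn(4, 1, 128, 128))
```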
Izadpanahkakhk et al. (2018) [81] proposed a system consisting of three main modules. (i) The region of interest extraction module (REM) is responsible for extracting palmprint ROIs using a bounding box approach: the input images first undergo a preprocessing step, and a transfer learning technique based on CNNs is then applied to identify the optimal placement of bounding boxes on the palmprint images. The extracted ROIs serve as input for the subsequent module. (ii) The feature extraction module (FEM) represents features using a pre-trained CNN architecture, applying the learned representations to extract discriminative features from the palmprint ROIs. (iii) The matching module (MM) takes the resulting feature vector as input and uses a machine learning classifier to perform the recognition task.
Matkowski et al. (2019) [18] developed an end-to-end deep learning approach named the end-to-end palmprint recognition network (EE-PRnet). This network comprises two fundamental components: ROI localization and alignment network (ROI-LAnet) and feature extraction and recognition network (FERnet). ROI-LAnet is tasked with transforming all input palmprint images into a consistent coordinate system and delineating the ROI containing distinguishing structural information. ROI-LAnet comprises two segments: the first is a pre-trained VGG-16 network with its top layers removed, and the second is a fully connected regression network. FERnet is tasked with extracting and recognizing palmprint features. This network is a self-contained CNN based on a modified VGG-16 architecture.
Chai et al. (2019) [30] proposed two separate comprehensive CNN systems, named PalmNet and GenderNet. These networks were thoroughly trained to excel at two tasks: palmprint recognition and gender categorization, respectively. Their study not only demonstrated strong performance in biometric recognition but also validated the notion that integrating gender information can improve the accuracy of palmprint identification. To further test this concept, they created two Boost CNN networks, BoostNet-Sequential and BoostNet-Parallel, with the goal of combining the strengths of palmprint recognition and gender categorization.
Genovese et al. (2019) [82] presented PalmNet, an innovative CNN architecture. In particular, their unsupervised training eliminates the need for class labels. They also proposed a novel Gabor-based technique that uses PCA to refine adaptive filters within the CNN, thereby improving its specificity for palmprint recognition.
Zhao and Zhang (2020) [83] presented a novel approach to improve multi-scenario palmprint recognition using a versatile framework. It uses deep convolutional networks (DC-NNs), termed deep discriminative representation (DDR), to learn effective features. These features work well for palmprint recognition under various conditions. The key innovation is to train DC-NNs to extract discriminative features from palmprints that include global abstract and local compact attributes. This framework remains effective even with limited training data, providing a new avenue for advancing palmprint recognition in diverse scenarios.
Liu and Kumar (2020) [24] presented a robust and versatile deep learning framework for contactless palmprint identification. Their approach uses a fully convolutional residual feature network (RFN) to generate deeply learned residual features, which improves accuracy and generalizability. A distinctive element is the soft-shifted triplet loss function, which enhances the learning of discriminative palmprint features. Additionally, they incorporated a contactless palm detector, built on a customized and trained CNN model, for effective detection of palmprint regions across diverse backgrounds.
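For intuition, a plain triplet loss over palmprint embeddings is sketched below; the soft-shifted variant used in [24] modifies this baseline formulation, so this shows only the underlying metric-learning principle:

```python
# Baseline triplet loss: pull same-palm embeddings together, push different palms apart.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """anchor/positive/negative: (N, D) embedding batches; margin is an assumption."""
    d_pos = F.pairwise_distance(anchor, positive)   # same identity
    d_neg = F.pairwise_distance(anchor, negative)   # different identity
    return F.relu(d_pos - d_neg + margin).mean()
```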
Liu et al. (2021) [84] introduced an end-to-end deep hashing network tailored for few-shot contactless palmprint identification, termed the similarity metric hashing network (SMHNet). Their framework integrates a structural similarity index (SSIM) module to elicit multi-scale representations encoding both holistic topology and localized texture details. A composite SSIM loss function alongside distance metrics supervises the training process for enhanced inter-class separability. Additionally, a hashing unit learns compact binary codes optimized for efficient storage and fast retrieval; the approach demonstrated significant improvements in few-shot palmprint recognition.
Shen et al. (2022) [41] introduced a progressive target distribution loss (PTD Loss) function, tailored to minimize the gap between positive cross-device sample affinities and negative within-device sample relations. Additionally, the authors established a new cross-device palmprint identification dataset comprising color images sourced from multiple capture platforms.
Shao and Zhong (2022) [85] developed a deep metric learning paradigm designed for open-set contactless palmprint identification, termed weight-based meta-metric learning (W2ML). Their framework strategically partitions the dataset into training and testing subsets with no overlap between the two phases. The training set is further divided into multiple tasks, each comprising a support set for representation learning and a query set for few-shot generalization assessment, akin to meta-learning. Support sets from the task-specific subspaces are aggregated into consolidated positive and negative meta-sets, and the model is trained using set-based distances between them. Additionally, hard prototype mining and weighting further enhance discrimination by identifying and prioritizing the most informative samples within each (positive and negative) meta-set. Extensive experiments demonstrated significant gains over conventional approaches, constituting a vital step towards open-set palmprint identification systems.
Türk et al. (2023) [86] devised a hybrid palmprint recognition framework fusing deep learning and classical machine learning methodologies. Their processing pipeline starts with multiple preprocessing interventions encompassing boundary delineation, binarization, finger exclusion, edge contour extraction, noise filtering, and image thinning to maximize distinctive friction ridge information. The second step focuses on extracting the ROI from the palmprint images. Thereafter, a CNN feature extractor learns hierarchical representations encoding both textural and topological traits. Finally, the CNN embeddings are classified using the CNN classifier together with support vector machine (SVM) classifiers for palmprint identification.
Table 7 provides a summary of the studies discussed in this subsection, presenting the feature extraction methods used, the datasets employed, the experimental protocols, and the key findings.

7.6. Comparative Analysis

In this subsection, a comprehensive overview of the strengths and weaknesses of the various palmprint identification and recognition methods (line-based, subspace learning-based, local direction encoding-based, texture descriptor-based, and deep learning-based approaches) is presented. A summary of the methods discussed in the previous subsections is given in Table 8. The comparative analysis of the five approach families indicates that many methods demonstrate satisfactory performance on simple, controlled datasets. However, significant disparities in both performance and computational cost arise when dealing with large-scale, unconstrained datasets, owing to the challenges posed by diverse environmental conditions.
In summary, there is a compelling opportunity to develop novel and real-time models specifically tailored to unconstrained palmprint recognition. Such advancements are essential to enhance overall performance, achieve a certain level of maturity in the field, and facilitate widespread commercial deployment.

8. Future Directions

Unlike more established biometric modalities, palmprint recognition is relatively nascent, necessitating further scrutiny. Challenges and issues that are well explored in facial and fingerprint recognition demand in-depth investigation in the context of palmprint recognition. This section outlines critical research topics requiring comprehensive exploration in future endeavors. Insights on emerging ideas are provided to guide and inspire forthcoming research initiatives.

8.1. Enhancement of Palmprint Imagery through the Application of Generative Adversarial Networks

The utilization of generative adversarial networks (GANs) [87,88,89,90] in the context of palmprint recognition, specifically applied to tasks such as inpainting, enhancement, deblurring, recolorizing, and segmentation, represents a highly promising paradigm. The inherent architecture of GANs, comprising a generator and a discriminator, proves instrumental in ameliorating palmprint images by proficiently generating authentic data to address instances of missing or occluded information. This versatile application extends to mitigating challenges associated with occlusions, blurring, diminished visual quality, and noise, thereby augmenting the accuracy of palmprint biometric recognition.
It is noteworthy that the computational demands of GAN training are considerable, necessitating meticulous optimization to facilitate real-time deployment. The multifaceted integration of GANs in the palmprint recognition domain, encompassing diverse tasks, presents a comprehensive strategy poised to be strategically leveraged in the imminent future. This strategic utilization holds the potential to substantially enhance the resilience and precision of palmprint recognition systems.
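As a schematic illustration only, the following generic adversarial training step (in PyTorch) shows how a generator that restores a degraded palmprint ROI and a discriminator that judges realism could be coupled; G, D, the optimizers, and the L1 weighting are placeholders rather than a palmprint-specific design from the cited works, and D is assumed to end in a sigmoid:

```python
# Generic GAN step for image restoration (illustrative sketch, not a published model).
import torch
import torch.nn.functional as F

def gan_step(G, D, degraded, clean, opt_G, opt_D):
    # Discriminator step: real palmprints -> 1, generated restorations -> 0.
    fake = G(degraded)
    pred_real, pred_fake = D(clean), D(fake.detach())
    loss_D = F.binary_cross_entropy(pred_real, torch.ones_like(pred_real)) + \
             F.binary_cross_entropy(pred_fake, torch.zeros_like(pred_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    # Generator step: fool D while staying close to the ground-truth palmprint.
    pred_fake = D(fake)
    loss_G = F.binary_cross_entropy(pred_fake, torch.ones_like(pred_fake)) + \
             F.l1_loss(fake, clean)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```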

8.2. Enhancing Recognition Rates and Expediting Processing Time through the Exploitation of Soft Biometric Attributes

Soft biometrics denotes the utilization of non-intrusive and readily quantifiable attributes for the purpose of biometric identification. Personal characteristics such as gender, ethnicity, age, scars, marks, and tattoos exemplify soft biometric traits [91,92,93,94]. Within the realm of palmprint biometrics, integrating soft biometrics is posited to augment recognition accuracy and diminish processing time through a judicious reduction in the number of dataset inquiries, thereby enhancing the operational efficiency and practical applicability of the system across diverse contexts.
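A toy illustration of this dataset-inquiry reduction is sketched below; the soft-attribute fields and the matcher are hypothetical placeholders:

```python
# Soft-biometric gallery filtering: restrict the candidate set by predicted
# soft traits (e.g., gender) before the expensive palmprint comparisons.
def identify(probe_feat, probe_soft, gallery, match_score):
    """gallery: list of dicts with 'feat' and 'soft' (e.g., {'gender': 'F'})."""
    candidates = [g for g in gallery
                  if all(g["soft"].get(k) == v for k, v in probe_soft.items())]
    if not candidates:                      # fall back to the full gallery
        candidates = gallery
    return max(candidates, key=lambda g: match_score(probe_feat, g["feat"]))
```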

8.3. Utilizing Three-Dimensional Representations to Mitigate Image Acquisition Challenges

Previous palmprint research has predominantly focused on two-dimensional (2D) images, which are susceptible to environmental factors. Recognition methodologies, categorized into line, texture, subspace, and coding approaches, often compromise accuracy due to the lack of depth information in 2D representations. To address this limitation, three-dimensional (3D) palmprints offer promising biometric identification with qualities of uniqueness, stability, and universality [95,96,97,98]. However, adopting 3D palmprints introduces challenges in data volume and computational complexity: sparse point clouds from 3D sensors limit mesh resolution and identification performance, while prolonged preprocessing times and compatibility issues with 3D recognition algorithms further complicate deployment. Advanced research into optimized 3D sensors is therefore crucial to address the time-efficiency and data-volume challenges of efficient and accurate palmprint-based recognition.

8.4. Exploring Liveness Detection and Mitigating Vulnerability to Spoofing Attacks

Liveness detection and susceptibility to spoofing attacks are significant concerns in palmprint biometrics. Despite the advantages of palmprint recognition, it is not immune to deceptive practices, similar to other biometric modalities. Spoofing attacks, involving the presentation of fake biometric data such as displayed or printed images, pose challenges to verifying the authenticity of presented data, jeopardizing the privacy and security of palmprint biometrics. Despite the issue’s importance, limited attention has been given to spoofing attacks on palmprint biometrics, as evident from the sparse existing studies [99,100,101,102].
Addressing this gap is crucial and necessitates research in robust presentation attack detection algorithms tailored to palmprint-based recognition systems. The lack of suitable anti-spoofing databases compounds this challenge, underscoring the urgency to fortify the security of palmprint recognition systems against adversarial activities.

9. Conclusions

This study furnishes an exhaustive survey of the palmprint biometrics literature, encompassing benchmark datasets, challenges and impediments, assessment metrics, and predominant techniques. Specifically, a rigorous analysis and comparison of myriad approaches is conducted across five taxonomic categories of feature extraction methodologies. Furthermore, a systematic classification of the diverse palmprint datasets employed for algorithmic development and testing is provided, alongside the documented performances of experimental results. Additionally, the study delineates outstanding challenges necessitating immediate attention through further inquiry to advance automated palmprint identification systems. Overall, through comprehensive aggregation and juxtaposition of these factors, this taxonomic synthesis aims to clarify the comparative strengths and limitations underlying contemporary methodologies. It is expected that this survey will serve as an inspiration for the research community and emerging scholars and will encourage further advances in palmprint recognition.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kong, A.; Zhang, D.; Kamel, M. A Survey of Palmprint Recognition. Pattern Recognit. 2009, 42, 1408–1418.
2. Connie, T.; Jin, A.T.B.; Ong, M.G.K.; Ling, D.N.C. An Automated Palmprint Recognition System. Image Vis. Comput. 2005, 23, 501–515.
3. Zhong, D.; Du, X.; Zhong, K. Decade Progress of Palmprint Recognition: A Brief Survey. Neurocomputing 2019, 328, 16–28.
4. Trabelsi, S.; Samai, D.; Dornaika, F.; Benlamoudi, A.; Bensid, K.; Taleb-Ahmed, A. Efficient Palmprint Biometric Identification Systems Using Deep Learning and Feature Selection Methods. Neural Comput. Appl. 2022, 34, 12119–12141.
5. Zhang, D.; Zuo, W.; Yue, F. A Comparative Study of Palmprint Recognition Algorithms. ACM Comput. Surv. 2012, 44, 1–37.
6. Zhao, S.; Fei, L.; Wen, J. Multiview-Learning-Based Generic Palmprint Recognition: A Literature Review. Mathematics 2023, 11, 1261.
7. Shu, W.; Zhang, D. Automated Personal Identification by Palmprint. Opt. Eng. 1998, 37, 2359–2362.
8. Jia, W.; Huang, D.S.; Zhang, D. Palmprint Verification Based on Robust Line Orientation Code. Pattern Recognit. 2008, 41, 1504–1513.
9. Zhang, D.; Kong, W.K.; You, J.; Wong, M. Online Palmprint Identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1041–1050.
10. Jain, A.K.; Feng, J. Latent Palmprint Matching. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 1032–1047.
11. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188.
12. Ross, A.; Banerjee, S.; Chen, C.; Chowdhury, A.; Mirjalili, V.; Sharma, R.; Yadav, S. Some Research Problems in Biometrics: The Future Beckons. In Proceedings of the 2019 International Conference on Biometrics (ICB), Crete, Greece, 4–7 June 2019; pp. 1–8.
13. Alausa, D.W.; Adetiba, E.; Badejo, J.A.; Davidson, I.E.; Obiyemi, O.; Buraimoh, E.; Oshin, O. Contactless Palmprint Recognition System: A Survey. IEEE Access 2022, 10, 132483–132505.
14. Zhang, L.; Li, L.; Yang, A.; Shen, Y.; Yang, M. Towards Contactless Palmprint Recognition: A Novel Device, a New Benchmark, and a Collaborative Representation Based Identification Approach. Pattern Recognit. 2017, 69, 199–212.
15. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
16. Kong, W.K.; Zhang, D.; Li, W. Palmprint Feature Extraction Using 2-D Gabor Filters. Pattern Recognit. 2003, 36, 2339–2347.
17. Laadjel, M.; Kurugollu, F.; Bouridane, A.; Boussakta, S. Degraded Partial Palmprint Recognition for Forensic Investigations. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1513–1516.
18. Matkowski, W.M.; Chai, T.; Kong, A.W.K. Palmprint Recognition in Uncontrolled and Uncooperative Environment. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1601–1615.
19. Wu, X.; Zhang, D.; Wang, K.; Huang, B. Palmprint Classification Using Principal Lines. Pattern Recognit. 2004, 37, 1987–1998.
20. Fei, L.; Lu, G.; Jia, W.; Teng, S.; Zhang, D. Feature Extraction Methods for Palmprint Recognition: A Survey and Evaluation. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 346–363.
21. Fei, L.; Zhang, B.; Zhang, W.; Teng, S. Local Apparent and Latent Direction Extraction for Palmprint Recognition. Inf. Sci. 2019, 473, 59–72.
22. Xiao, Q.; Lu, J.; Jia, W.; Liu, X. Extracting Palmprint ROI from Whole Hand Image Using Straight Line Clusters. IEEE Access 2019, 7, 74327–74339.
23. Leng, L.; Liu, G.; Li, M.; Khan, M.K.; Al-Khouri, A.M. Logical Conjunction of Triple-Perpendicular-Directional Translation Residual for Contactless Palmprint Preprocessing. In Proceedings of the 11th International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 7–9 April 2014; pp. 523–528.
24. Liu, Y.; Kumar, A. Contactless Palmprint Identification Using Deeply Learned Residual Features. IEEE Trans. Biom. Behav. Identity Sci. 2020, 2, 172–181.
25. Ali, M.M.; Gaikwad, A.T. Multimodal Biometrics Enhancement Recognition System Based on Fusion of Fingerprint and Palmprint: A Review. Glob. J. Comput. Sci. Technol. 2016, 16, 13–26.
26. Dai, J.; Feng, J.; Zhou, J. Robust and Efficient Ridge-Based Palmprint Matching. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1618–1632.
27. Benzaoui, A.; Khaldi, Y.; Bouaouina, R.; Amrouni, N.; Alshazly, H.; Ouahabi, A. A Comprehensive Survey on Ear Recognition: Databases, Approaches, Comparative Analysis, and Open Challenges. Neurocomputing 2023, 537, 236–270.
28. Zhang, D.; Guo, Z.; Lu, G.; Zhang, L.; Zuo, W. An Online System of Multispectral Palmprint Verification. IEEE Trans. Instrum. Meas. 2009, 59, 480–490.
29. Tamrakar, D.; Khanna, P. Occlusion Invariant Palmprint Recognition with ULBP Histograms. Procedia Comput. Sci. 2015, 54, 491–500.
30. Chai, T.; Prasad, S.; Wang, S. Boosting Palmprint Identification with Gender Information Using DeepNet. Future Gener. Comput. Syst. 2019, 99, 41–53.
31. Zhao, S.; Zhang, B. Learning Salient and Discriminative Descriptor for Palmprint Feature Extraction and Identification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5219–5230.
32. COEP Palmprint Database. Available online: https://www.coep.org.in/resources/coeppalmprintdatabase (accessed on 8 November 2023).
33. Sun, Z.; Tan, T.; Wang, Y.; Li, S.Z. Ordinal Palmprint Representation for Personal Identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005.
34. Hao, Y.; Sun, Z.; Tan, T. Comparative Studies on Multispectral Palm Image Fusion for Biometrics. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2007; pp. 12–21.
35. Kumar, A. Incorporating Cohort Information for Reliable Palmprint Authentication. In Proceedings of the 6th Indian Conference on Computer Vision, Graphics & Image Processing, Bhubaneswar, India, 16–19 December 2008; pp. 583–590.
36. GPDS Palmprint Image Database. Available online: https://gpds.ulpgc.es/ (accessed on 8 November 2023).
37. Jia, W.; Hu, R.X.; Gui, J.; Zhao, Y.; Ren, X.M. Palmprint Recognition across Different Devices. Sensors 2012, 12, 7938–7964.
38. Charfi, N.; Trichili, H.; Alimi, A.M.; Solaiman, B. Local Invariant Representation for Multi-Instance Touchless Palmprint Identification. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 003522–003527.
39. Zhang, Y.; Zhang, L.; Zhang, R.; Li, S.; Li, J.; Huang, F. Towards Palmprint Verification on Smartphones. arXiv 2020, arXiv:2003.13266.
40. Shao, H.; Zhong, D.; Du, X. Deep Distillation Hashing for Unconstrained Palmprint Recognition. IEEE Trans. Instrum. Meas. 2021, 70, 1–13.
41. Shen, L.; Zhang, Y.; Zhao, K.; Zhang, R.; Shen, W. Distribution Alignment for Cross-Device Palmprint Recognition. Pattern Recognit. 2022, 132, 108942.
42. Hassanat, A.; Al-Awadi, M.; Btoush, E.; Al-Btoush, A.; Alhasanat, E.A.; Altarawneh, G. New Mobile Phone and Webcam Hand Images Databases for Personal Authentication and Identification. Procedia Manuf. 2015, 3, 4060–4067.
43. Li, W.; Zhang, D.; Xu, Z. Palmprint Identification by Fourier Transform. Int. J. Pattern Recognit. Artif. Intell. 2002, 16, 417–432.
44. Jia, W.; Ling, B.; Chau, K.W.; Heutte, L. Palmprint Identification Using Restricted Fusion. Appl. Math. Comput. 2008, 205, 927–934.
45. Jia, W.; Hu, R.X.; Lei, Y.K.; Zhao, Y.; Gui, J. Histogram of Oriented Lines for Palmprint Recognition. IEEE Trans. Syst. Man Cybern. Syst. 2013, 44, 385–395.
46. Luo, Y.T.; Zhao, L.Y.; Zhang, B.; Jia, W.; Xue, F.; Lu, J.T.; Xu, B.Q. Local Line Directional Pattern for Palmprint Recognition. Pattern Recognit. 2016, 50, 26–44.
47. Mokni, R.; Drira, H.; Kherallah, M. Combining Shape Analysis and Texture Pattern for Palmprint Identification. Multimed. Tools Appl. 2017, 76, 23981–24008.
48. Gumaei, A.; Sammouda, R.; Al-Salman, A.M.; Alsanad, A. An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images. Sensors 2018, 18, 1575.
49. Zhou, K.; Zhou, X.; Yu, L.; Shen, L.; Yu, S. Double Biologically Inspired Transform Network for Robust Palmprint Recognition. Neurocomputing 2019, 337, 24–45.
50. Wu, X.; Zhang, D.; Wang, K. Fisherpalms Based Palmprint Recognition. Pattern Recognit. Lett. 2003, 24, 2829–2838.
51. Hu, D.; Feng, G.; Zhou, Z. Two-Dimensional Locality Preserving Projections (2DLPP) with Its Application to Palmprint Recognition. Pattern Recognit. 2007, 40, 339–342.
52. Pan, X.; Ruan, Q.Q. Palmprint Recognition with Improved Two-Dimensional Locality Preserving Projections. Image Vis. Comput. 2008, 26, 1261–1268.
53. Lu, J.; Tan, Y.P. Improved Discriminant Locality Preserving Projections for Face and Palmprint Recognition. Neurocomputing 2011, 74, 3760–3767.
54. Rida, I.; Al-Maadeed, S.; Mahmood, A.; Bouridane, A.; Bakshi, S. Palmprint Identification Using an Ensemble of Sparse Representations. IEEE Access 2018, 6, 3241–3248.
55. Rida, I.; Herault, R.; Marcialis, G.L.; Gasso, G. Palmprint Recognition with an Efficient Data Driven Ensemble Classifier. Pattern Recognit. Lett. 2019, 126, 21–30.
56. Wan, M.; Chen, X.; Zhan, T.; Xu, C.; Yang, G.; Zhou, H. Sparse Fuzzy Two-Dimensional Discriminant Local Preserving Projection (SF2DDLPP) for Robust Image Feature Extraction. Inf. Sci. 2021, 563, 1–15.
57. Zhao, S.; Wu, J.; Fei, L.; Zhang, B.; Zhao, P. Double-Cohesion Learning Based Multiview and Discriminant Palmprint Recognition. Inf. Fusion 2022, 83, 96–109.
58. Wan, M.; Chen, X.; Zhan, T.; Yang, G.; Tan, H.; Zheng, H. Low-Rank 2D Local Discriminant Graph Embedding for Robust Image Feature Extraction. Pattern Recognit. 2023, 133, 109034.
59. Kumar, A.; Shen, H.C. Palmprint Identification Using Palmcodes. In Proceedings of the 3rd International Conference on Image and Graphics (ICIG'04), Hong Kong, China, 18–20 December 2004; pp. 258–261.
60. Kong, A.; Zhang, D.; Kamel, M. Palmprint Identification Using Feature-Level Fusion. Pattern Recognit. 2006, 39, 478–487.
61. Mansoor, A.B.; Masood, H.; Mumtaz, M.; Khan, S.A. A Feature Level Multimodal Approach for Palmprint Identification Using Directional Subband Energies. J. Netw. Comput. Appl. 2011, 34, 159–171.
62. Zhang, L.; Li, H.; Niu, J. Fragile Bits in Palmprint Recognition. IEEE Signal Process. Lett. 2012, 19, 663–666.
63. Zhang, S.; Gu, X. Palmprint Recognition Method Based on Score Level Fusion. Optik-Int. J. Light Electron Opt. 2013, 124, 3340–3344.
64. Li, H.; Zhang, J.; Wang, L. Robust Palmprint Identification Based on Directional Representations and Compressed Sensing. Multimed. Tools Appl. 2014, 70, 2331–2345.
65. Fei, L.; Xu, Y.; Tang, W.; Zhang, D. Double-Orientation Code and Nonlinear Matching Scheme for Palmprint Recognition. Pattern Recognit. 2016, 49, 89–101.
66. Xu, Y.; Fei, L.; Wen, J.; Zhang, D. Discriminative and Robust Competitive Code for Palmprint Recognition. IEEE Trans. Syst. Man Cybern. Syst. 2016, 48, 232–241.
67. Almaghtuf, J.; Khelifi, F.; Bouridane, A. Fast and Efficient Difference of Block Means Code for Palmprint Recognition. Mach. Vis. Appl. 2020, 31, 1–10.
68. Liang, L.; Chen, T.; Fei, L. Orientation Space Code and Multi-Feature Two-Phase Sparse Representation for Palmprint Recognition. Int. J. Mach. Learn. Cybern. 2020, 11, 1453–1461.
69. Hammami, M.; Ben Jemaa, S.; Ben-Abdallah, H. Selection of Discriminative Sub-Regions for Palmprint Recognition. Multimed. Tools Appl. 2014, 68, 1023–1050.
70. Raghavendra, R.; Busch, C. Texture Based Features for Robust Palmprint Recognition: A Comparative Study. EURASIP J. Inf. Secur. 2015, 2015, 5.
71. Tamrakar, D.; Khanna, P. Kernel Discriminant Analysis of Block-Wise Gaussian Derivative Phase Pattern Histogram for Palmprint Recognition. J. Vis. Commun. Image Represent. 2016, 40, 432–448.
72. Doghmane, H.; Bourouba, H.; Messaoudi, K.; Bouridane, A. Palmprint Recognition Based on Discriminant Multiscale Representation. J. Electron. Imaging 2018, 27, 053032.
73. Zhang, S.; Wang, H.; Huang, W.; Zhang, C. Combining Modified LBP and Weighted SRC for Palmprint Recognition. Signal Image Video Process. 2018, 12, 1035–1042.
74. El-Tarhouni, W.; Boubchir, L.; Elbendak, M.; Bouridane, A. Multispectral Palmprint Recognition Using Pascal Coefficients-Based LBP and PHOG Descriptors with Random Sampling. Neural Comput. Appl. 2019, 31, 593–603.
75. Attallah, B.; Serir, A.; Chahir, Y. Feature Extraction in Palmprint Recognition Using Spiral of Moment Skewness and Kurtosis Algorithm. Pattern Anal. Appl. 2019, 22, 1197–1205.
76. Chaudhary, G.; Srivastava, S. A Robust 2D-Cochlear Transform-Based Palmprint Recognition. Soft Comput. 2020, 24, 2311–2328.
77. Zhang, S.; Wang, H.; Huang, W. Palmprint Identification Combining Hierarchical Multi-Scale Complete LBP and Weighted SRC. Soft Comput. 2020, 24, 4041–4053.
78. Amrouni, N.; Benzaoui, A.; Bouaouina, R.; Khaldi, Y.; Adjabi, I.; Bouglimina, O. Contactless Palmprint Recognition Using Binarized Statistical Image Features-Based Multiresolution Analysis. Sensors 2022, 22, 9814.
79. Harrou, F.; Dairi, A.; Zeroual, A.; Sun, Y. Forecasting of Bicycle and Pedestrian Traffic Using Flexible and Efficient Hybrid Deep Learning Approach. Appl. Sci. 2022, 12, 4482.
80. Zeroual, A.; Harrou, F.; Dairi, A.; Sun, Y. Deep Learning Methods for Forecasting COVID-19 Time-Series Data: A Comparative Study. Chaos Solitons Fractals 2020, 140, 110121.
81. Izadpanahkakhk, M.; Razavi, S.M.; Taghipour-Gorjikolaie, M.; Zahiri, S.H.; Uncini, A. Deep Region of Interest and Feature Extraction Models for Palmprint Verification Using Convolutional Neural Networks Transfer Learning. Appl. Sci. 2018, 8, 1210.
82. Genovese, A.; Piuri, V.; Plataniotis, K.N.; Scotti, F. PalmNet: Gabor-PCA Convolutional Networks for Touchless Palmprint Recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3160–3174.
83. Zhao, S.; Zhang, B. Deep Discriminative Representation for Generic Palmprint Recognition. Pattern Recognit. 2020, 98, 107071.
84. Liu, C.; Zhong, D.; Shao, H. Few-Shot Palmprint Recognition Based on Similarity Metric Hashing Network. Neurocomputing 2021, 456, 540–549.
85. Shao, H.; Zhong, D. Towards Open-Set Touchless Palmprint Recognition via Weight-Based Meta Metric Learning. Pattern Recognit. 2022, 121, 108247.
86. Türk, Ö.; Çalışkan, A.; Acar, E.; Ergen, B. Palmprint Recognition System Based on Deep Region of Interest Features with the Aid of Hybrid Approach. Signal Image Video Process. 2023, 17, 3837–3845.
87. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative Adversarial Networks: An Overview. IEEE Signal Process. Mag. 2018, 35, 53–65.
88. Yi, X.; Walia, E.; Babyn, P. Generative Adversarial Network in Medical Imaging: A Review. Med. Image Anal. 2019, 58, 101552.
89. Gui, J.; Sun, Z.; Wen, Y.; Tao, D.; Ye, J. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. IEEE Trans. Knowl. Data Eng. 2021, 35, 3313–3332.
90. Khaldi, Y.; Benzaoui, A. A New Framework for Grayscale Ear Images Recognition Using Generative Adversarial Networks under Unconstrained Conditions. Evol. Syst. 2021, 12, 923–934.
91. Hassan, B.; Izquierdo, E.; Piatrik, T. Soft Biometrics: A Survey—Benchmark Analysis, Open Challenges, and Recommendations. Multimed. Tools Appl. 2021, 1–44.
92. Alonso-Fernandez, F.; Diaz, K.H.; Ramis, S.; Perales, F.J.; Bigun, J. Soft-Biometrics Estimation in the Era of Facial Masks. In Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 16–18 September 2020; pp. 1–6.
93. Nixon, M.S.; Correia, P.L.; Nasrollahi, K.; Moeslund, T.B.; Hadid, A.; Tistarelli, M. On Soft Biometrics. Pattern Recognit. Lett. 2015, 68, 218–230.
94. Zhang, H.; Beveridge, J.R.; Draper, B.A.; Phillips, P.J. On the Effectiveness of Soft Biometrics for Increasing Face Verification Rates. Comput. Vis. Image Underst. 2015, 137, 50–62.
95. Jia, W.; Gao, J.; Xia, W.; Zhao, Y.; Min, H.; Lu, J.T. A Performance Evaluation of Classic Convolutional Neural Networks for 2D and 3D Palmprint and Palm Vein Recognition. Int. J. Autom. Comput. 2021, 18, 18–44.
96. Fei, L.; Zhang, B.; Xu, Y.; Jia, W.; Wen, J.; Wu, J. Precision Direction and Compact Surface Type Representation for 3D Palmprint Identification. Pattern Recognit. 2019, 87, 237–247.
97. Chaa, M.; Akhtar, Z.; Attia, A. 3D Palmprint Recognition Using Unsupervised Convolutional Deep Learning Network and SVM Classifier. IET Image Process. 2019, 13, 736–745.
98. Bai, X.; Meng, Z.; Gao, N.; Zhang, Z.; Zhang, D. 3D Palmprint Identification Using Blocked Histogram and Improved Sparse Representation-Based Classifier. Neural Comput. Appl. 2020, 32, 12547–12560.
99. Bhilare, S.; Kanhangad, V.; Chaudhari, N. A Study on Vulnerability and Presentation Attack Detection in Palmprint Verification System. Pattern Anal. Appl. 2018, 21, 769–782.
100. Li, X.; Bu, W.; Wu, X. Palmprint Liveness Detection by Combining Binarized Statistical Image Features and Image Quality Assessment. In Proceedings of the Biometric Recognition: 10th Chinese Conference, CCBR 2015, Tianjin, China, 13–15 November 2015; pp. 275–283.
101. Farmanbar, M.; Toygar, Ö. Spoof Detection on Face and Palmprint Biometrics. Signal Image Video Process. 2017, 11, 1253–1260.
102. Aydoğdu, Ö.; Sadreddini, Z.; Ekinci, M. A Study on Liveness Analysis for Palmprint Recognition System. In Proceedings of the 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018; pp. 1–4.
Figure 1. Palmprint characteristics (a) at low resolution and (b) at high resolution.
Figure 2. Architectural framework of a palmprint recognition system.
Figure 3. Two characteristic approaches for palmprint image acquisition: (a) physical contact palmprint image capturing method and (b) contactless palmprint image capturing model.
Figure 4. Example of preprocessing steps for a palmprint image: (a) image acquisition, (b) palm contour extraction, (c) key point detection, (d) establishment of a coordinate system, and (e) extraction of central parts.
Figure 5. Comparison between (a) a contact and (b) a non-contact capturing method used in biometric palmprint systems.
Figure 6. Proposed taxonomy for palmprint datasets: a comprehensive classification and overview of widely utilized datasets.
Figure 7. Illustrative instances from the aforementioned contact-based palmprint datasets.
Figure 8. Illustrative instances from the aforementioned contactless-based palmprint datasets.
Figure 9. Proposed taxonomy for palmprint feature extraction approaches.
Table 1. Comparative summary of the contact-based palmprint databases (#Imgs: number of images, #ID: number of identities, #Imgs/ID: number of images per identity, Res: resolution, Norm: normalization, Var: variation, Acqui.Dev: acquisition device).

| Database | Year | Developer | #Imgs | #IDs | #Imgs/ID | Res. | Norm. | Var. | Acqui. Dev. |
|---|---|---|---|---|---|---|---|---|---|
| PolyU [9] | 2003 | The Hong Kong Polytechnic University | 7752 | 386 | ≈20 | 384 × 284 | No | No | Scanner |
| PolyU-MS [28] | 2009 | The Hong Kong Polytechnic University | 6000 × 4 (red, green, blue, and NIR) | 500 × 4 | 12 | 700 × 500 | Yes | No | Scanner |
| IIITDMJ [29] | 2015 | Indian Institute of Information Technology, Design and Manufacturing, Jabalpur | 900 | 150 | 6 | 700 × 500 | No | Small | Scanner |
| BJTU_PalmV1 [30] | 2019 | Institute of Information Science, Beijing Jiaotong University | 2431 | 174 | From 8 to 10 | 1792 × 1200 | No | Large | 2 CCD cameras |
| PV_790 [31] | 2020 | University of Macau | 5180 | 518 | 10 | N/A | No | No | Scanner |
| COEP [32] | N/A | College of Engineering, Pune | 1344 | 168 | 8 | 1600 × 1200 | No | No | Scanner |
Table 2. Comparative summary of the contactless-based palmprint databases (#Imgs: number of images, #ID: number of identities, #Imgs/ID: number of images per identity, Res: resolution, Norm: normalization, Var: variation, Acqui.Dev: acquisition device).

| Database | Year | Developer | #Imgs | #IDs | #Imgs/ID | Res. | Norm. | Var. | Acqui. Dev. |
|---|---|---|---|---|---|---|---|---|---|
| CASIA [33] | 2005 | Chinese Academy of Sciences, Institute of Automation | 5502 | 624 | ≈8 | 640 × 480 | No | Small | Digital camera |
| CASIA-MS [34] | 2007 | Chinese Academy of Sciences, Institute of Automation | 1200 | 200 | 6 | 700 × 500 | No | Large | Digital camera |
| IITD [35] | 2008 | Indian Institute of Technology, Delhi | 2601 | 460 | ≈6 | 800 × 600 | Yes | Small | Digital camera |
| GPDS [36] | 2011 | Las Palmas de Gran Canaria University | 1000 | 100 | 10 | 800 × 600 | Yes | Small | 2 webcams |
| Cross-Sensor [37] | 2012 | Chinese Academy of Science | 12,000 | 200 | 60 | 816 × 612 / 778 × 581 / 816 × 612 | No | Large | 1 digital camera + 2 mobile phones |
| REST [38] | 2016 | Sfax University, Tunisia | 1945 | 358 | ≈5 | 2048 × 1536 | No | Small | Digital camera |
| TJI [14] | 2017 | Tongji University, China | 12,000 | 600 | 20 | 800 × 600 | No | Small | Developed device |
| NTU-PI-v1 [18] | 2019 | Nanyang Technological University, Singapore | 7781 | 2035 | ≈4 | From 30 × 30 to 1415 × 1415 | No | Large | From internet |
| NTU-CP-v1 [18] | 2019 | Nanyang Technological University, Singapore | 2478 | 655 | ≈4 | From 420 × 420 to 1977 × 1977 | No | Large | Digital camera |
| BJTU_PalmV1 [30] | 2019 | Institute of Information Science, Beijing Jiaotong University | 2663 | 296 | From 6 to 10 | 3264 × 2448 | No | Large | Several mobile phones |
| MPD [39] | 2020 | Tongji University, China | 16,000 | 400 | 40 | N/A | No | Large | Mobile phone |
| XJTU-UP [40] | 2021 | Xi'an Jiaotong University, China | >20,000 | 200 | ≈100 | From 3264 × 2448 to 5312 × 2988 | Yes | Large | 5 smartphone cameras |
| CrossDevice-A [41] | 2022 | Tencent Youtu Lab, China | 18,600 | 310 | 60 | N/A | No | Large | Mobile phone + IoT |
| CrossDevice-B [41] | 2022 | Tencent Youtu Lab, China | 6000 | 200 | 30 | N/A | No | Large | Mobile phone + webcam |
Table 3. A comparative overview of line-based approaches (#ID: number of identities, #Imgs: number of images, Acc: valid accuracy).

| Year | Paper | Method | Dataset | #IDs | #Imgs | Evaluation Protocol | Acc. (%) |
|---|---|---|---|---|---|---|---|
| 2002 | Li et al. [43] | Fourier transform | Private | 500 | 3000 | 1 Img/sub (Rand) for training, remaining 5 Imgs for testing | 95.48 |
| 2008 | Jia et al. [44] | PL + LPP | PolyU | 100 | 600 | First session for training, second session for testing | 100.00 |
| 2014 | Jia et al. [45] | HOL | PolyU | 386 | 7752 | First 3 Imgs/sub from 1st session for training, all images from 2nd session for testing | 99.97 |
| | | | PolyU-MS | 500 | 6000 | | 100.00 |
| 2016 | Luo et al. [46] | LLDP | PolyU | 386 | 7752 | First 3 Imgs from 1st session for training, second session for testing | 100.00 |
| | | | PolyU-MS | 500 | 6000 | | 100.00 |
| | | | Cross-Sensor | 200 | 12,000 | | 98.45 |
| | | | IITD | 460 | 2601 | First Img/sub for training, remaining (4–6) for testing | 92.00 |
| 2017 | Mokni et al. [47] | Principal lines + texture pattern | PolyU | 386 | 7752 | All Imgs from 1st session for training, 5 Imgs (Rand) from second session for testing | 96.99 |
| | | | CASIA | 312 | 5502 | 5 Imgs/sub for training, 3 Imgs/sub for testing (only right hands) | 98.00 |
| | | | IITD | 460 | 2300 | 3 Imgs/sub for training, 2 Imgs/sub for testing | 97.98 |
| 2018 | Gumaei et al. [48] | HOG-SGF + AE | PolyU-MS | 500 | 6000 | First 3 Imgs from 1st session for training, 6 Imgs from second session for testing | 99.22 |
| | | | CASIA | 614 | 5502 | 6 Imgs/sub for training, remaining for testing (Rand) | 97.75 |
| | | | TJI | 600 | 12,000 | First session for training, second session for testing | 98.85 |
| 2019 | Zhou et al. [49] | DBIT | PolyU-MS | 500 | 6000 | 5-fold cross-validation | 99.83 |
| | | | PolyU | 386 | 7752 | | 98.85 |
| | | | CASIA | 614 | 5502 | | 97.02 |
| | | | IITD | 460 | 2601 | | 94.79 |
| | | | COEP | 168 | 1344 | | 97.64 |
Table 4. A comparative overview of subspace learning-based approaches (#ID: number of identities, #Imgs: number of images, Acc: valid accuracy).

| Year | Paper | Method | Dataset | #IDs | #Imgs | Evaluation Protocol | Acc. (%) |
|---|---|---|---|---|---|---|---|
| 2003 | Wu et al. [50] | Fisherpalms | Private | 300 | 3000 | 6 Imgs/sub (Rand) for training, remaining 4 Imgs/sub for testing | 99.75 |
| 2005 | Connie et al. [2] | WFDA | Private | 150 | 900 | 3 Imgs/sub for training, remaining 3 Imgs/sub for testing | 98.64 |
| 2007 | Hu et al. [51] | 2DLPP | PolyU | 100 | 600 | First session for training, second session for testing | 84.67 |
| 2008 | Pan and Ruan [52] | I2DLPPG | Private-1 | 40 | 400 | 5 Imgs/sub for training (Rand), remaining for testing | 99.50 |
| | | | Private-2 | 346 | 1730 | 5-fold cross-validation | 95.77 |
| 2011 | Lu and Tan [53] | W2D−DLPP | PolyU | 100 | 600 | 4 Imgs/sub for training (Rand, 20 iterations), remaining for testing | 94.90 |
| 2018 | Rida et al. [54] | SR | PolyU-MS | 500 | 6000 | 4 Imgs/sub for training, remaining for testing (Rand, 10 iterations) | 99.24 |
| | | | PolyU | 374 | 3740 | | 99.87 |
| 2019 | Rida et al. [55] | RSM | PolyU | 374 | 3740 | First 4 Imgs/sub for training, remaining for testing | 99.96 |
| | | | PolyU-MS | 500 | 6000 | | 99.15 |
| | | | Private | 400 | 8000 | | 94.50 |
| 2021 | Wan et al. [56] | SF2DDLPP | PolyU | 100 | 600 | 50% for training (Rand), 50% for testing | 95.18 |
| 2022 | Zhao et al. [57] | DC_MDPR | IITD | 460 | 2601 | 4 Imgs/sub for training, remaining for testing (Rand, 10 iterations) | 98.43 |
| | | | GPDS | 100 | 1000 | | 98.93 |
| | | | CASIA | 624 | 5500 | | 99.11 |
| | | | TJI | 600 | 12,000 | | 99.44 |
| | | | PV_790 | 418 | 4180 | | 99.76 |
| 2023 | Wan et al. [58] | LR-2DLDGE | PolyU | 100 | 600 | 50% for training (Rand, 10 iterations), 50% for testing | 93.87 |
Table 5. A comparative overview of local direction encoding-based approaches (#ID: number of identities, #Imgs: number of images, Acc: valid accuracy).

| Year | Paper | Method | Dataset | #IDs | #Imgs | Evaluation Protocol | Acc. (%) |
|---|---|---|---|---|---|---|---|
| 2004 | Kumar and Shen [59] | PalmCode | Private | 40 | 800 | 1 Img/sub for training, remaining for testing (both palms) | 98.00 |
| 2006 | Kong et al. [60] | Fusion Code | Private | 488 | 9599 | First session for training, second session for testing | 98.26 |
| 2011 | Mansoor et al. [61] | CT + NSCT | PolyU | 386 | 7752 | 5 Imgs/sub for training, remaining 3 for testing | 88.91 |
| | | | GPDS | 50 | 500 | 3 Imgs/sub for training, remaining 7 for testing | 98.20 |
| 2012 | Zhang et al. [62] | E-BOCV | PolyU | 384 | 7752 | First session for training, second session for testing | EER = 0.03 |
| 2013 | Zhang and Gu [63] | Competitive Coding + TPTSR | PolyU-MS | 500 | 6000 | First 3 Imgs/sub for training, remaining 6 Imgs/sub for testing | 98.00 |
| 2014 | Li et al. [64] | DR + PCA | PolyU | 100 | 600 | First 3 Imgs/sub for training, remaining for testing | 97.00 |
| 2016 | Fei et al. [65] | DOC | PolyU | 374 | 3740 | First 2 Imgs/sub for training, remaining for testing | 99.27 |
| | | | PolyU-MS | 500 | 6000 | | 98.60 |
| | | | IITD | 460 | 2300 | | 78.70 |
| 2016 | Xu et al. [66] | DRCC | PolyU | 386 | 7752 | First 2 Imgs/sub for training, remaining for testing | 98.79 |
| | | | PolyU-MS | 500 | 6000 | | 98.67 |
| | | | IITD | 460 | 2600 | | 81.38 |
| 2020 | Almaghtuf et al. [67] | DBM | PolyU | 386 | 7752 | First 3 Imgs/sub for training, remaining 3 for testing | 100.00 |
| | | | PolyU-MS | 500 | 6000 | | 100.00 |
| | | | IITD | 230 | 3220 | | 95.40 |
| | | | CASIA | 624 | 5502 | First 2 Imgs/sub for training, remaining 2 for testing | 93.90 |
| 2020 | Liang et al. [68] | MTPSR | PolyU | 386 | 7752 | First 4 Imgs/sub for training, remaining for testing | 99.58 |
| | | | GPDS | 100 | 1000 | | 96.83 |
| | | | IITD | 460 | 2601 | | 97.11 |
Table 6. A comparative overview of texture-based approaches (#ID: number of identities, #Imgs: number of images, Acc: valid accuracy).

| Year | Paper | Method | Dataset | #IDs | #Imgs | Evaluation Protocol | Acc. (%) |
|---|---|---|---|---|---|---|---|
| 2014 | Hammami et al. [69] | LBP + SFFS | CASIA | 564 | 4512 | 5 Imgs/sub for training, remaining 3 Imgs/sub for testing (Rand) | 97.53 |
| | | | PolyU | 386 | 7752 | First session for training, second session for testing | 95.35 |
| 2015 | Raghavendra and Busch [70] | B-BSIF | PolyU | 352 | 7040 | First 10 Imgs/sub for training, remaining 10 Imgs/sub for testing | EER = 4.06 |
| | | | IITD | 470 | 2350 | 4 Imgs/sub for training, remaining 1 Img/sub for testing (Rand, 10 iterations) | EER = 0.42 |
| | | | PolyU-MS | 500 | 6000 | First session for training, second session for testing | EER = 0.00 |
| 2016 | Tamrakar and Khanna [71] | BGDPPH + KDA | PolyU | 400 | 8000 | 4-fold cross-validation | 99.98 |
| | | | CASIA | 624 | 5335 | 3-fold cross-validation | 99.22 |
| | | | IITD | 430 | 2400 | | 99.19 |
| | | | IIITDMJ | 150 | 900 | | 100.00 |
| | | | PolyU-MS | 500 | 6000 | 2-fold cross-validation | 100.00 |
| | | | CASIA-MS | 200 | 1200 | | 98.99 |
| 2018 | Doghmane et al. [72] | DGLSPH | PolyU | 386 | 7752 | First 3 Imgs/sub for training, remaining for testing | 99.95 |
| | | | PolyU 2D/3D | 400 | 8000 | | 99.95 |
| | | | IITD | 460 | 2601 | | 99.57 |
| 2018 | Zhang et al. [73] | WACS-LBP + WSRC | PolyU | 386 | 7752 | First 10 Imgs/sub for training, remaining 10 Imgs/sub for testing | 99.14 |
| | | | CASIA | 624 | 5502 | 7 Imgs/sub for training, remaining for testing (50 Rand iterations) | 98.06 |
| 2019 | El-Tarhouni et al. [74] | PCMLBP + PHOG | PolyU | 500 | 6000 | First session for training, second session for testing | 99.34 |
| 2019 | Attallah et al. [75] | LBP + Spiral features + mRMR | PolyU | 352 | 7040 | First session for training, second session for testing | ≈97.00 |
| | | | IITD | 470 | 2350 | 4 Imgs/sub for training, remaining for testing | ≈98.00 |
| | | | PolyU-MS | 500 | 6000 | First session for training, second session for testing | ≈99.00 |
| 2020 | Chaudhary and Srivastava [76] | 2DCT | IITD | 200 | 1200 | 50% for training (first Imgs), 50% for testing | 97.85 |
| | | | CASIA | 312 | 2496 | | 98.66 |
| | | | PolyU | 386 | 7752 | | 99.80 |
| 2020 | Zhang et al. [77] | HMS-CLBP + WSRC | PolyU | 386 | 7752 | 5-fold cross-validation (50 iterations) | 99.68 |
| | | | CASIA | 312 | 5335 | | 97.13 |
| 2022 | Amrouni et al. [78] | BSIF + DWT | IITD | 460 | 2601 | First 3 Imgs/sub for training, remaining for testing | 98.77 |
| | | | CASIA | 624 | 4992 | First 4 Imgs/sub for training, remaining for testing | 98.10 |
Table 7. A comparative overview of deep learning-based approaches (#ID: number of identities, #Imgs: number of images, Acc: valid accuracy).

| Year | Paper | Method | Dataset | #IDs | #Imgs | Evaluation Protocol | Acc. (%) |
|---|---|---|---|---|---|---|---|
| 2018 | Izadpanahkakhk et al. [81] | Fast CNN | PolyU | 386 | 7752 | First 4 Imgs/sub for training, remaining for testing | 99.40 |
| | | | Private | 354 | 3540 | | 98.30 |
| | | | IITD | 460 | 2600 | | 94.70 |
| 2019 | Matkowski et al. [18] | EE-PRnet | CASIA | 618 | 5502 | First 4 Imgs/sub for training, remaining for testing | 97.65 |
| | | | IITD | 460 | 2601 | | 99.61 |
| | | | PolyU | 177 | 1770 | 50% for training, 50% for testing (Rand) | 99.77 |
| | | | NTU-CP-v1 | 655 | 2478 | | 95.34 |
| | | | NTU-PI-v1 | 2035 | 7881 | | 41.92 |
| 2019 | Chai et al. [30] | PalmNet + GenderNet | PolyU 2D/3D | 177 | 1770 | 50% for training, 50% for testing | 98.98 |
| | | | IITD | 460 | 2601 | | 99.01 |
| | | | TJI | 600 | 12,000 | | 99.50 |
| | | | CASIA | 620 | 5502 | | 99.18 |
| | | | BJTU_PalmV1 | 347 | 2431 | | 100.00 |
| | | | BJTU_PalmV2 | 296 | 2663 | | 95.16 |
| 2019 | Genovese et al. [82] | PalmNet-GaborPCA | CASIA | 624 | 5455 | 2-fold cross-validation, 5 permutations (Rand) | 99.77 |
| | | | IITD | 467 | 2669 | | 99.37 |
| | | | REST | 358 | 1937 | | 97.16 |
| | | | TJI | 600 | 5182 | | 99.83 |
| 2020 | Zhao and Zhang [83] | DDR | CASIA | 624 | 5500 | 4 Imgs/sub for training, remaining for testing (Rand, 5 iterations) | 99.41 |
| | | | IITD | 460 | 2600 | | 98.70 |
| | | | PolyU-MS | 500 | 6000 | | 99.95 |
| 2020 | Liu and Kumar [24] | RFN | IITD | 230 | 1150 | 5 left Imgs for training, 5 right Imgs for testing | 99.20 |
| 2021 | Liu et al. [84] | SMHNet | PolyU-MS | 500 | 6000 | 5-way 1-shot recognition | 98.94 |
| | | | XJTU-UP | N/A | N/A | | 89.73 |
| | | | TJI | 600 | 12,000 | | 97.36 |
| 2022 | Shen et al. [41] | ArcPalm-IR50 + PTD | CASIA | 624 | 5502 | 5-fold cross-validation (4/5 for training, 1/5 for testing) | 99.85 |
| | | | IITD | 460 | 2601 | | 100.00 |
| | | | PolyU | 114 | 1140 | | 100.00 |
| | | | TJI | 600 | 12,000 | | 100.00 |
| | | | MPD | 400 | 16,000 | | 99.78 |
| | | | CrossDevice-A | 310 | 18,600 | | 99.19 |
| | | | CrossDevice-B | 200 | 6000 | | 71.20 |
| 2022 | Shao and Zhong [85] | W2ML | XJTU-UP | 200 | 2000 | 50% for training (first half), 50% for testing | 91.63 |
| | | | TJI | 600 | 12,000 | | 93.39 |
| | | | MPD | 400 | 16,000 | | 71.74 |
| | | | IITD | 460 | 2600 | | 94.02 |
| 2023 | Türk et al. [86] | CNN deep features + SVM | PolyU | 386 | 7752 | Monte Carlo cross-validation (5 iterations) | 99.72 |
Table 8. A comparative summary among various palmprint recognition approaches.

| Approach | Strengths | Weaknesses |
|---|---|---|
| Line-based methods | These methods demonstrate the ability to acquire data-driven representations and exhibit strong discriminative abilities. | It is essential that the training data represent the classes under consideration comprehensively and accurately. |
| Subspace learning-based methods | High descriptive capabilities coupled with a low computational cost characterize the efficiency of these methods. | The sensitivity to the size of the subspace is a significant challenge that requires careful consideration in its application. |
| Local direction encoding-based methods | These methods have a high discriminatory capability and stable characteristics. | They require significant computing resources and contribute to high computational costs. |
| Texture-based methods | Characteristics can be extracted from lower-resolution images, and these techniques exhibit consistent and stable characteristics. | They are highly susceptible to noise interference. |
| Deep learning-based methods | These methods demonstrate exceptional robustness when applied to large and complex datasets. | They require a significant amount of training data and incur high computational costs. |