Review

Multiview-Learning-Based Generic Palmprint Recognition: A Literature Review

1 School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China
2 Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(5), 1261; https://doi.org/10.3390/math11051261
Submission received: 17 January 2023 / Revised: 23 February 2023 / Accepted: 3 March 2023 / Published: 6 March 2023

Abstract:
Palmprint recognition has been widely applied to security authentication due to its rich characteristics, e.g., local direction, wrinkle, and texture. However, different types of palmprint images captured in different application scenarios usually contain a variety of dominant features. Specifically, the palmprint recognition performance is degraded by interference factors, e.g., noise, rotations, and shadows, when palmprint images are acquired in open-set environments. To handle the long-standing interference information in such images, multiview palmprint feature learning has been proposed to enhance the feature expression by exploiting multiple characteristics from diverse views. In this paper, we first introduce six types of palmprint representation methods published from 2004 to 2022, which describe the characteristics of palmprints from a single view. Afterward, a number of multiview-learning-based palmprint recognition methods (2004–2022) are listed, which discuss how to achieve better recognition performance by adopting different complementary types of features from multiple views. To date, no work has summarized multiview fusion for different types of palmprint features. In this paper, the aims, frameworks, and related methods of multiview palmprint representation are summarized in detail.
MSC:
68P05; 68P30; 68Q25; 68Q32

1. Introduction

In recent years, palmprint recognition has become a fascinating research area in security authentication. Compared with other biometric traits, e.g., the iris, face, and fingerprint, the palmprint offers richer characteristics, greater user convenience, and better hygiene [1,2,3]. For example, Genovese et al. [4] designed the PalmNet network for deep palmprint representation. Zhang et al. proposed the CR_CompCode method for contactless palmprint identification by combining sparse representation and CompCode encoding [5]. However, when palmprint images are acquired in open-set environments, the palmprint recognition performance is degraded by a variety of interferences [1,5]. Open-set palmprint recognition refers to the unknown application scenarios that may be encountered in diverse recognition environments, e.g., contact-based, contactless, near-infrared, enclosed, or open spaces, and databases with multiple types of noise. To handle the long-standing interference factors in palmprint images, e.g., noise, rotation, and shadow, multiview palmprint feature learning has been proposed to enhance the feature expression by exploiting multiple characteristics from diverse views [6,7,8]. Up to now, no work has summarized multiview fusion for different types of palmprint features. In this paper, we introduce the aims, frameworks, and basic theories of multiview palmprint representation methods to explain why they can attain better performances by adopting different complementary types of features from multiple views.
Generally, palmprint recognition includes three main steps, i.e., region of interest (ROI) extraction, feature representation, and verification or identification [9,10,11]. Since the background of the palm contains interference information in the image (see Figure 1), we usually detect the sub-region at the palm center as the ROI [12]. Afterward, various types of features can be extracted from the ROI, which directly determines the discriminant property of the feature representation. Finally, we can select an effective classifier for verification or identification using the features obtained in the second step. In particular, feature representation can significantly affect the final recognition performance. According to the feature extraction approaches, feature types, and feature expression manners, existing palmprint representation methods can be divided into six categories: (1) Texture based. For example, Kumar et al. [13] proposed performing personal authentication by adopting hand shape and texture characteristics. Younesi et al. [14] extracted texture features using Gabor filters for palmprint recognition. (2) Line based. For instance, Huang et al. [15] detected principal lines from the palmprint image for palmprint recognition. Guo et al. [16] further proposed a binary orientation co-occurrence vector for palmprint verification. Furthermore, Jia et al. [17] encoded the complete direction patterns for palmprint recognition. (3) Subspace learning based. For example, Fei et al. [18] integrated latent subspace learning and principal line distance measurement into a joint learning model for contactless palmprint recognition. Afterward, Zhao et al. [19] directly learned robust and salient features from the pixels of the palmprint image by learning a latent binary subspace. (4) Correlation filter based. For example, Jia et al. [17] utilized a set of correlation filters to produce a sharp peak when filtering a palmprint image. (5) Local descriptor based. For example, Li et al. [20] presented a local micro-structure tetra pattern descriptor for palmprint feature extraction. (6) CNN based. For instance, Zhao et al. [9] proposed producing discriminative deep features for open-set palmprint recognition by adopting a deep convolutional neural network. Jia et al. [21] designed the EEPNet network for efficient palmprint identification. Nevertheless, most of the above palmprint representation methods focused on a specific application scenario, which limited the popularization of palmprint recognition in more complex application environments. Therefore, multiview palmprint representation has been proposed to enhance the feature expression by exploiting multiple characteristics from diverse views [22].
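To make the three-step pipeline concrete, the following is a minimal Python/OpenCV sketch; the central crop, Gabor pooling, and nearest-neighbour matcher are simplified illustrative stand-ins, not any specific published method.

```python
# Minimal sketch of the three-step palmprint pipeline:
# ROI extraction -> feature representation -> matching.
import numpy as np
import cv2


def extract_roi(palm_img: np.ndarray, size: int = 128) -> np.ndarray:
    """Crop a central sub-region as a stand-in for keypoint-based ROI detection."""
    h, w = palm_img.shape[:2]
    s = min(h, w) // 2
    cy, cx = h // 2, w // 2
    roi = palm_img[cy - s // 2: cy + s // 2, cx - s // 2: cx + s // 2]
    return cv2.resize(roi, (size, size))


def extract_features(roi: np.ndarray) -> np.ndarray:
    """Texture-view features: pooled responses of a small 2D Gabor filter bank."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 6):          # 6 orientations
        kern = cv2.getGaborKernel((17, 17), 4.0, theta, 8.0, 0.5)
        resp = cv2.filter2D(roi.astype(np.float32), cv2.CV_32F, kern)
        feats.extend([resp.mean(), resp.std()])           # crude pooling
    return np.asarray(feats)


def identify(query: np.ndarray, gallery: dict) -> str:
    """Nearest-neighbour matching against enrolled templates {id: feature}."""
    return min(gallery, key=lambda k: np.linalg.norm(gallery[k] - query))
```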
In generic palmprint recognition scenarios, there exist different categories of palmprint images, such as contactless, contact-based, hyperspectral, and high-resolution palmprint images (see Figure 2). Recently, multiview learning (MVL) has attracted significant attention from researchers, since the utilization of heterogeneous features from multiple views has enormous potential for better recognition performance [24,25,26]. Compared with single-view representation, multiview learning can typically leverage more characteristics and structural information hidden in the data to improve the learning performance [27,28]. For example, Yang et al. [29] fused multiple features in a latent low-dimensional subspace. Tao et al. [30] learned a classifier that fused multiview features in the Hilbert space to maximize the agreement among views. In [31], a multiview nearest-subspace classifier was proposed for image classification. Besides this, Gupta et al. [32] adopted a multiview ensemble learning method for binary classification in the brain–computer interface domain.
As is known, a palmprint image usually contains different types of features, such as textures, local orientations, wrinkles, and principal lines. Nevertheless, these features cannot be clearly captured when the palmprint image is acquired under unconstrained and open application environments. In such cases, satisfactory recognition accuracy can hardly be obtained. Inspired by the success of MVL, some researchers proposed utilizing multiple palmprint features for palmprint recognition; in such a manner, the characteristics from multiple views can be comprehensively adopted to enhance the final feature representation. For instance, Lin et al. [33] extracted multiview deep features for 3D and contactless palmprint recognition. In [34], Zhao et al. projected multiview features into the same subspace, where a structure suture learning strategy was used to measure the learning performance. Meanwhile, a low-rank reconstruction term was introduced into the learning model to capture the robust and intrinsic structure of the data. In [22], Fei et al. learned multiview and discriminant features for palmprint images, forming a multiview representation of the hand-print image that included texture-view and orientation-view feature containers. Specifically, the multiview features were encoded by reducing the intra-class distances and maximizing the inter-class distances. Besides this, Liang et al. [35] proposed acquiring two views of a palmprint with a dual-camera device that captures the palm region under visible and infrared light. In conclusion, MVL-based palmprint recognition can achieve better performances on open-set palmprint databases by enhancing the final feature representation.
Up to now, some reviews have analyzed the development of palmprint recognition. For example, ref. [3] reviewed the development of contactless palmprint recognition, including ROI extraction, feature extraction, and matching. The authors of [10] presented a brief summary of palmprint database acquisition, data preprocessing, and feature representation. Furthermore, ref. [11] summarized the development of contactless fingerprint recognition using deep-learning-based technologies. Compared with the previous literature reviews on palmprint authentication, this paper provides a more comprehensive summary of multiview palmprint recognition, including different types of single-view features, open-set palmprint databases, ROI extraction, multiview feature containers, and multiview learning for feature fusion. Besides this, the basic theories that motivate multiview palmprint recognition, i.e., linear regression theory, multiview subspace learning, and sparse representation, are introduced to analyze the reasons for the achievement of better performances. This study's main contributions are as follows:
(1)
We completely introduce different types of open-set palmprint databases in real-world application scenarios.
(2)
To the best of our knowledge, this study is the first work to create a detailed and comprehensive summary and analysis for multiview palmprint recognition methods.
In the rest of this article, Section 2 presents different types of palmprint databases and the approaches to ROI extraction. Section 3 provides a comprehensive introduction to some multiview feature containers and multiview feature learning models. In Section 4, the conclusion of this article is summarized.

2. Generic Palmprint Databases and Image Preprocessing

2.1. Palmprint Dataset Construction

In the last decade, palmprint recognition has been widely adopted in a variety of security authentication scenarios. Considering the application environments, the existing palmprint databases can be divided into three categories, i.e., contact-based, contactless, and palm–vein. Contact-based palmprint databases indicate that the hand is fixed when the palmprint image is acquired. Conversely, for contactless palmprint databases, the palmprint image is collected without any constraint on the hand. Palm–vein images, in particular, are obtained under near-infrared illumination. In this sub-section, we introduce some typical palmprint databases, i.e., CASIA [36], the Digital Signal Processing Group palmprint dataset (GPDS) [37], the Tongji palmprint dataset (TJI) [5], IIT Delhi (IITD) [38], the NTU Forensic Image database [23], REST [39], the PolyU palmprint database (PolyU) [40], M_NIR [41], and the Palm-Vein 790 dataset (PV_790). We detail these palmprint databases in Table 1.
Specifically, the CASIA, TJI, GPDS, NTU, REST, and IITD databases were collected with no constraints, making recognition more difficult than on the contact-based databases, i.e., PolyU, PV_790, and M_NIR. Moreover, since the PV_790 and M_NIR databases were acquired under the near-infrared environment, the images in these two databases reflect the palm–vein networks of capillaries. Because PV_790 was captured under the near-infrared environment, its image quality is often affected by low illumination, Gaussian noise, and low resolution. Figure 3 shows a brief overview of the acquisition process for different types of palmprint databases. Besides this, Figure 4 shows some palmprint samples from these databases.
The CASIA database includes 5500 samples collected from 312 subjects, covering both the left and the right hands, in which each category comprises 8–17 palmprint images. GPDS contains 1000 samples captured from the right palms of 100 persons, where each class includes 10 images. The TJI database contains 12,000 images in total, acquired from both the right and left palms of 300 persons, where two sessions were conducted with an interval of two months and each category comprises 10 samples per session. The IITD dataset comprises 2601 palmprint samples from 460 classes captured from 230 persons, where each class includes five or six samples. The NTU-PI-v1 database was established by collecting contactless palmprint images from the Internet, in which 7781 samples were acquired from 2035 hands of 1093 different subjects. The REST dataset consists of 1945 samples in total, captured from 179 individuals. The PolyU dataset comprises 7752 palmprint images captured from 386 hands in two sessions with an interval of one month. The M_NIR database was collected using a contact-based camera under the near-infrared environment, where 6000 palm–vein images from 500 classes were acquired in two sessions; each class contains 12 images, with 6 per session. The M_Blue database was collected using a contact-based camera under blue light, where 6000 images from 500 classes were acquired in two sessions; again, each class contains 12 images, with 6 per session. The PV_790 palm–vein dataset was acquired under the near-infrared environment at a wavelength of 790 nm, where 5180 images were captured from 518 palm classes of 209 persons.

2.2. Palmprint ROI Segmentation

ROI extraction is always a necessary and important phase for palmprint representation in the whole recognition procedure, since the ROI location directly influences the feature extraction from the palmprint image. Because the interference information in the background can significantly reduce the recognition performance, the ROI is usually segmented from the original image before feature extraction [42,43]. Specifically, a central region containing rich and stable characteristics is usually segmented from the palmprint image as the ROI [44]. In such a manner, the redundant information and the interference factors in the background can be removed, and the volume of the data is reduced as well [45]. Figure 5 depicts the brief process of ROI extraction from the palmprint image [40,46]. For different palmprint images, segmenting the ROI from the same position is a significant challenge; achieving this ensures the stability of palmprint representation, which provides a reliable recognition rate and fast processing speed [47]. Table 2 lists some typical palmprint ROI extraction methods, and Figure 6 shows some ROI samples from different types of palmprint images. In this sub-section, we introduce some classical palmprint ROI detection methods as follows.
In [56], Xiao et al. presented a method to extract the ROI from a complete palm image by using straight-line clustering. The core idea of this study was that the fingers in the image could be detected using linear clusters due to their unique appearance. Firstly, the palm image was transformed into a binary image, over which many lines were drawn according to pre-defined criteria. If a line intersects the hand region at 8 points, it can be concluded that the line passes through 4 fingers, so the finger locations could be found. The finger joint areas were then detected by drawing a number of straight lines, yielding many key-point candidates in the four-knuckle region. Then, the k-means clustering algorithm was used to compute 4 cluster centers, which served as the final 4 key points. In the resulting coordinate system, the ROI could be located after rotation normalization.
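The following is a simplified sketch of this straight-line-clustering idea, assuming Otsu binarization and horizontal scan lines; the paper's exact line-drawing criteria and the rotation normalization step are omitted.

```python
# Binarize the hand image, keep points on scan lines that cross the hand
# region in exactly four runs (i.e., pass through four fingers), and cluster
# those points with k-means to obtain four stable key points.
import numpy as np
import cv2


def finger_keypoints(hand_img_gray: np.ndarray, k: int = 4) -> np.ndarray:
    _, binary = cv2.threshold(hand_img_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    candidates = []
    for y in range(0, binary.shape[0], 4):          # scan horizontal lines
        row = (binary[y] > 0).astype(np.int8)
        starts = np.flatnonzero(np.diff(row) == 1)  # rising edges: run starts
        ends = np.flatnonzero(np.diff(row) == -1)   # falling edges: run ends
        if len(starts) == k and len(ends) == k:     # line crosses four fingers
            centers = (starts + ends) / 2.0
            candidates.extend((cx, float(y)) for cx in centers)
    pts = np.float32(candidates)                    # assumes candidates found
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 0.5)
    _, _, key_pts = cv2.kmeans(pts, k, None, criteria, 10,
                               cv2.KMEANS_PP_CENTERS)
    return key_pts   # four key points; the ROI is cropped in their frame
```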
In [57], the palm lines and palmar veins were first preprocessed using binarization and denoising techniques. Following that, the ROI was located according to the maximum tangent circle and centroid methods. This approach was validated on the PUT palm–vein dataset as well as the CASIA dataset, and the experimental results illustrated that the proposed method was feasible and effective.

3. Multiview Palmprint Recognition

3.1. Multiview Feature Containers

Generally, palmprint images contain rich texture and orientation characteristics. In order to fully utilize the different types of palmprint features, a number of feature containers have been designed, e.g., the orientation-view container and the DCNN-view container. Here, we detail the formation process of these two feature containers.
(1) Orientation-view container. Texture is a crucial type of characteristic for palmprint representation. A number of texture filters, e.g., the 2D Gabor, LBP, Gaussian, and LDP filters, have performed effectively in encoding the directions of the palmprint. Specifically, LBP is powerful for texture feature representation. Besides this, since the 2D Gabor filter is sensitive to direction changes, it is usually adopted to represent orientation characteristics in biometric images. Building on these methods, more local-direction-based algorithms have been proposed for hand-print verification and identification. Figure 7 shows the brief pipeline of local orientation feature extraction from the palmprint ROI image.
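As an illustration of how an orientation-view container can be formed, the sketch below filters an ROI with a bank of 2D Gabor filters at six directions and keeps the per-pixel winning direction index, in the spirit of competitive coding; the filter parameters are illustrative assumptions, not values from a specific paper.

```python
# Orientation coding with a six-direction 2D Gabor filter bank: the dominant
# (most negative) line response wins at each pixel.
import numpy as np
import cv2


def orientation_code(roi_gray: np.ndarray, n_dirs: int = 6) -> np.ndarray:
    responses = []
    for j in range(n_dirs):
        theta = j * np.pi / n_dirs
        kern = cv2.getGaborKernel((35, 35), sigma=5.6, theta=theta,
                                  lambd=11.2, gamma=0.5)
        responses.append(cv2.filter2D(roi_gray.astype(np.float32),
                                      cv2.CV_32F, kern))
    # palm lines appear dark, so the minimum response indexes the direction
    return np.argmin(np.stack(responses, axis=0), axis=0).astype(np.uint8)
```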
(2) DCNN-view container. Deep convolutional neural networks (DCNNs) have been widely adopted to represent high-impact palmprint characteristics. For instance, in [4], Genovese et al. designed PalmNet for palmprint representation, which exploited a number of Gabor-filter-based CNN layers to extract high-impact palmprint features. Michele et al. [58] explored the MobileNet V2 deep convolutional neural network for palmprint representation, where the pretrained MobileNet was fine-tuned. Besides this, Daas et al. [59] proposed a multi-modal biometric system by fusing the features of the finger vein and the finger knuckle print (FKP). Furthermore, Liu et al. [60] designed a soft-shifted triplet loss function to learn deep residual palmprint characteristics for contactless palmprint representation.
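A hedged sketch of a DCNN-view container is given below: a MobileNet V2 backbone pretrained on ImageNet is reused as a fixed feature extractor for the palmprint ROI. This mirrors the transfer-learning idea in [58] but is not that paper's exact configuration.

```python
# Reuse a pretrained MobileNet V2 as a frozen feature extractor; the output
# is a 1280-D global-average-pooled descriptor per ROI.
import torch
from torchvision import models, transforms

backbone = models.mobilenet_v2(
    weights=models.MobileNet_V2_Weights.IMAGENET1K_V1).features.eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),                       # HxWx3 uint8 -> 3xHxW float
    transforms.Resize((224, 224), antialias=True),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def dcnn_view(roi_rgb):                          # roi_rgb: HxWx3 uint8 array
    x = preprocess(roi_rgb).unsqueeze(0)         # 1x3x224x224
    fmap = backbone(x)                           # 1x1280x7x7 feature map
    return fmap.mean(dim=(2, 3)).squeeze(0)      # 1280-D view descriptor
```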
We list more single-view palmprint representation methods in Table 3.

3.2. Multiview Palmprint Representation

In recent years, multiview feature learning (MFL) has become attractive; it has been applied to a variety of applications, e.g., image retrieval, webpage retrieval, and speech recognition. Since the features collected from multiple views contain more complementary information, MFL has the potential to achieve better performance than single-view feature representation. Most of the existing MFL-based methods are motivated by conventional machine learning techniques [65,66], e.g., least squares regression (LSR) [67], low-rank representation (LRR) [68], graph learning [69], and sparse representation (SR) [70]. Specifically, LSR can project the original samples into a low-dimensional label subspace, low-rank representation can capture the global structure of the data, graph learning can capture the correlations between any two samples of the data, and sparse representation can obtain a robust representation using an ℓp-norm regularization. Here, we list more related works in Table 4.
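For concreteness, the following minimal sketch shows the LSR building block used by many of these models: min_W ||XW − Y||_F² + λ||W||_F², solved in closed form via the regularized normal equations; the one-hot label targets and ridge parameter are standard choices, not a specific paper's settings.

```python
# Least squares regression into the label subspace, solved in closed form.
import numpy as np


def lsr_fit(X: np.ndarray, y: np.ndarray, lam: float = 1e-2):
    """X: n x d feature matrix (one sample per row); y: length-n labels."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)   # one-hot targets
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return W, classes


def lsr_predict(X: np.ndarray, W: np.ndarray, classes: np.ndarray):
    return classes[np.argmax(X @ W, axis=1)]             # closest label axis
```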
In order to improve the expression ability, many multiview classification methods have been proposed [71,72]. For example, Zheng et al. [82] proposed a multiview low-rank LSR method to capture the low-rank structure of the data. Following that, Tao et al. [30] formulated both the cohesion and the diversity information of different features for multiview classifier learning. In [24], MMatch jointly learns view-specific representations and class probabilities of training data for discriminative representation learning. Up to now, a variety of MFL-based palmprint methods have achieved significant performances. Figure 8 shows the brief framework of multiview palmprint feature learning, while Table 5 lists some multiview palmprint representation methods. In Figure 8, one can see that different types of single-view features are usually extracted first to reflect the corresponding characteristics. Then, some machine learning techniques, e.g., LSR, graph learning, and RPCA, can be applied to obtain a low-dimensional and discriminative feature representation. Finally, identification or verification can be conducted using a classifier. Here, we detail some classical MFL-based palmprint recognition methods as follows.
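The sketch below illustrates the generic pipeline of Figure 8 in its simplest form: one LSR projection per view into the shared label space, with the views fused by averaging their label scores. Real methods in Table 5 couple the views during training; this toy version fuses them only at prediction time.

```python
# Per-view ridge-regularized LSR projections, fused by score averaging.
import numpy as np


def multiview_lsr(views, y, lam: float = 1e-2):
    """views: list of (n x d_v) matrices; learns one projection per view."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)
    Ws = [np.linalg.solve(Xv.T @ Xv + lam * np.eye(Xv.shape[1]), Xv.T @ Y)
          for Xv in views]
    return Ws, classes


def multiview_predict(views, Ws, classes):
    score = sum(Xv @ W for Xv, W in zip(views, Ws)) / len(views)  # fuse views
    return classes[np.argmax(score, axis=1)]
```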
In [27], a multiview discriminant palmprint recognition method based on double-cohesion learning was proposed. It introduced a neighborhood graph constraint and a cross-view consistency term into the projection matrix learning and integrated them into a single optimization problem. In addition, this method represented the features of multiple views in a low-dimensional subspace, which effectively reduced the computational complexity. In the comparative experiments, the proposed DC_MDPR was compared with a number of single-view representation methods on both contactless and contact-based databases, where the proposed multiview method consistently outperformed the single-view-based methods.
In [34], a structure-suture-learning-based method was proposed, which could comprehensively utilize palmprint features from multiple views. Different from existing related methods, this study introduced a structure suture learning strategy to capture the potential consensus structure among different views. At the same time, a low-rank reconstruction term was introduced into the learning model to obtain the robust and intrinsic structure of the data. In particular, because no additional structure-capturing terms were introduced into the model, the training time was significantly reduced. In the comparative experiments, the proposed method was compared with a number of single-view representation methods on different types of databases, i.e., contactless, contact-based, palm–vein, and dorsal vein databases, where the proposed multiview method consistently achieved the best performances.
In [22], a joint multiview feature learning method for hand-print recognition was proposed. Unlike most existing palmprint descriptors, which were hand-crafted and focused on single-view features, this method automatically learned multiview and discriminant features for palmprint images. In particular, it first formed a multiview representation of the hand-print image that included texture-view and orientation-view feature containers. Then, it learned multiview feature codes by reducing the intra-class distances and maximizing the inter-class distances.
In [83], Zhang et al. proposed a palmprint recognition system based on the fusion of a global strategy and local features. Fourier features were selected as the global features, and Gabor features as the local features. The palmprint features were divided into blocks, and groups of new features at the same positions were serially integrated by a local classifier, from which a second-layer local feature classifier could be obtained. Besides this, Liang et al. [89] designed a direction–space coding strategy to encode the direction–space features of the palmprint. Moreover, they proposed a multi-feature two-phase sparse representation approach for the final matching.
In [84], LBP, LDP, and DCNN features were extracted from each spectrum of the hyperspectral palmprint images, forming three feature matrices. Since each matrix included redundant information and had a high dimensionality, 2D-PCA was applied to reduce the dimensionality of each feature matrix and generate a unified feature vector. Finally, the collaborative residuals of each view were fused at the score level to determine the classification results.
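A minimal sketch of the 2D-PCA step is given below: the image covariance is built directly from the feature matrices (no vectorization), and its leading eigenvectors give the projection axes; the component count is an illustrative assumption.

```python
# 2D-PCA: eigen-decompose the image scatter matrix of a stack of feature
# matrices and project each matrix onto the top eigenvectors.
import numpy as np


def twod_pca(mats, n_components: int = 8):
    """mats: array of shape (n, h, w); returns (n, h, n_components) projections."""
    mats = np.asarray(mats, dtype=float)
    centered = mats - mats.mean(axis=0)
    # image scatter: (1/n) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w)
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(mats)
    eigvals, eigvecs = np.linalg.eigh(G)          # ascending eigenvalues
    axes = eigvecs[:, ::-1][:, :n_components]     # keep top eigenvectors
    return centered @ axes, axes                  # projections and axes
```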
In [90], two palmprint recognition fusion schemes were proposed. In the first scheme, the main line of the test image was first extracted and matched with the main lines of all training images. Secondly, a small training sub-database was constructed by selecting the training images with large matching scores. Finally, within this small training sub-database, decision-level fusion combining the main-line matching score and locality-preserving projection features was carried out for the final recognition.
In [85], a multi-type feature encoding scheme was proposed to jointly extract and encode the discriminative orientation and texture features from palmprint images. In particular, it extracted the dominant direction to describe the orientation and texture features. Different from previous works, it utilized majority voting to simultaneously extract the directional and texture features in local patches, such that the multiple types of features could be more reliable.
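A hedged sketch of the patch-level majority voting idea follows: given a per-pixel orientation code map (e.g., from the orientation_code sketch in Section 3.1), each non-overlapping patch votes for its dominant direction; the patch size is an illustrative assumption.

```python
# Patch-wise majority vote over a per-pixel orientation code map.
import numpy as np


def patchwise_majority(code_map: np.ndarray, patch: int = 16,
                       n_dirs: int = 6) -> np.ndarray:
    h, w = code_map.shape
    out = np.empty((h // patch, w // patch), dtype=np.uint8)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = code_map[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            # each patch votes: the most frequent direction code wins
            out[i, j] = np.bincount(block.ravel(), minlength=n_dirs).argmax()
    return out
```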
In [86], a feature-level fusion scheme was proposed. First, it assessed the quality of the ROIs, followed by the application of fractional difference masks to enhance the textural details. To select the most discriminative palmprint features, a feature transformation algorithm inspired by subspace learning was adopted, which could reduce the computing time and feature dimensionality, as well as achieve better performances. A trained support vector machine was adopted for the final palmprint identification.
In [87], a high-resolution palmprint recognition algorithm was proposed, in which the matching performance of the traditional algorithm was significantly boosted by using the minutiae, density, orientation, and principal line features for palmprint recognition. Besides this, an adaptive orientation estimation scheme was designed that outperformed the existing algorithms. Furthermore, a fusion scheme was utilized in recognition applications, and its performance was better than that of traditional fusion approaches.
In [91], a multi-feature learning method using one training sample per palm was proposed to encode discriminative multiple features for palmprint representation. Different from most existing manual methods that extract single-type features from the original pixels, it learned discriminative multiple features from multiple types of data vectors. Finally, it grouped the non-overlapping block histograms of the compact multi-feature codes into a feature vector for feature representation. In addition, in [92], a palmprint identification method was proposed that used Gabor filters to encode multiview palmprint features, which were then fused into a single feature vector at the feature level for the final identification.
In [93], a layered multi-feature encoding strategy was designed to achieve satisfactory palmprint recognition performances on large palmprint databases. Following this, in [94], low-cost scanners were used to capture hand biometric images, and fusion at the matching-score level was performed by adopting dynamic weighting rules. Besides this, in [95], a multi-color component fusion strategy was proposed for palmprint feature fusion, where the multiple features were fused into sequential feature vectors and the nearest neighbor classifier was selected for the final classification. Furthermore, in [96], palmprint features of different levels were fused into a multi-scale feature vector.
In [97], each training sample was first divided into five patches, and each patch was decomposed with multi-wavelets. In addition, the segmented image was downsampled as a new sample, and a softmax classifier was adopted to perform palmprint identification. Besides this, in [98], a collaborative representation framework with ℓ1-norm or ℓ2-norm regularization was proposed for 3D palmprint recognition. In [99], a heuristic palmprint recognition method was proposed, which extracted three views of palmprint features. Additionally, in [88], a multi-stream CNN fusion framework was designed for multiview palmprint representation.
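As an illustration of collaborative representation with ℓ2 regularization in the spirit of [98], the sketch below codes a query over the whole training dictionary in closed form and assigns the class with the smallest class-wise reconstruction residual; the regularized residual rules of the original paper are omitted.

```python
# Collaborative representation classification (CRC) with an l2 penalty:
# alpha = argmin ||q - D*alpha||^2 + lam*||alpha||^2, solved in closed form.
import numpy as np


def crc_classify(D: np.ndarray, labels: np.ndarray, q: np.ndarray,
                 lam: float = 1e-3):
    """D: d x n dictionary of training columns; labels: length-n; q: d-dim query."""
    P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)  # projection
    alpha = P @ q                                  # collaborative code
    best, best_res = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        res = np.linalg.norm(q - D[:, idx] @ alpha[idx])  # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best
```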
In [100], a non-stationary feature fusion scheme was proposed, where a block-based DCT was adopted to produce the fused feature vector. Besides this, in [101], a feature mapping strategy was first designed from two deep architectures by using a weighted loss function; afterward, a convolution-based feature fusion block was applied in the palmprint matching stage. In addition, Attallah et al. [102] fused the BSIF and LBP traits, integrated with PCA, for efficient multimodal hand-print recognition. In [103], the BPFNet network was constructed to perform feature fusion for palmprint recognition as an end-to-end network from ROI alignment to image fusion. In [104], feature fusion was performed at the score level by adopting several feature encoding methods, e.g., the Radon transform, TPLBP, FPLBP, HOG, the Ridgelet transform, the Gabor filter, and the DCT, where satisfactory performances could be attained even when only one sample per class was used as the training data.
Since previous 3D palmprint representation methods relied on hand-crafted encoding, they can hardly adapt to dynamic environments. In [105], Fei et al. proposed a joint feature representation learning scheme to extract compact curvature features for 3D palmprint verification. This algorithm first formed multiple curvature feature vectors to completely encode the curvature characteristics of 3D palmprint samples. Then, it jointly learned a feature projection function that mapped the curvature feature vectors into binary codes with maximum inter-class variance for discriminative representation. Furthermore, it learned the collaborative binary features of the multiple curvature codes by minimizing the information loss between the final representation and the multiple curvature features; in such a manner, the proposed method could be more compact in its feature representation.
From the above-listed palmprint recognition methods, it is obvious that multiview-based representation methods can adapt to more palmprint application environments. Among these methods, JMvFL, DC_MDPR, and SSL_RMPR achieved the best performances in generic palmprint recognition applications. These three methods enhanced palmprint representation by learning a latent subspace across the different views, where a low-dimensional feature vector could be attained. JMvFL was evaluated on the CASIA and TJI palmprint databases, as well as the PolyU finger knuckle print (FKP) database. JMvFL aimed to reduce the intra-class distances and increase the inter-class distances to make the feature codes of different classes more separable. Furthermore, the variance of the inter-view feature codes was also maximized so that the multiview feature codes were more complementary, enhancing their overall discriminant property. DC_MDPR was evaluated on the contactless databases, i.e., CASIA, IITD, and GPDS, and the palm–vein databases PV_790 and M_NIR. DC_MDPR jointly learned a set of projection matrices to project the original features into the same subspace, which could simultaneously reduce the inter-view distances and the intra-class distances. Besides this, SSL_RMPR was evaluated on contactless palmprint databases, a palm–vein database, and a dorsal vein database, including CASIA, GPDS, TJI, PV_790, and DHV_860. Compared with JMvFL and DC_MDPR, SSL_RMPR significantly improved the recognition performances, since the discriminant palmprint representation could be adaptively enhanced by the elastic graph in the SSL_RMPR method.

3.3. Analysis

From the analysis of the previous multiview representation methods, it can be observed that multiview palmprint representation has more comprehensive advantages in comparison with single-view representation methods. We briefly summarize the advantages of multiview palmprint representation as follows:
(1)
Single-view palmprint representation methods usually perform satisfactorily in a specific scene. However, when the application environment changes, their feature representation ability decreases.
(2)
Since multiview feature learning can adopt different complementary types of features from diverse views, multiview palmprint representation methods can achieve stable recognition results by enhancing the palmprint feature expression.
(3)
Multiview palmprint recognition can adapt to more complex application scenarios, where single-view palmprint representation has significant application limitations.

4. Conclusions

In this paper, we have reported the motivations, aims, and frameworks of multiview palmprint representation in detail. Specifically, we comprehensively introduced different types of open-set palmprint databases in real-world application scenarios. Besides this, the related works that most motivate multiview representation methods were introduced. To the best of our knowledge, this study is the first work to present a detailed and comprehensive summary and analysis of multiview palmprint recognition methods. From the previous analysis, it can be observed that multiview palmprint representation can adapt to more complex application scenarios in comparison with single-view palmprint recognition methods, since multiview-learning-based methods can enhance palmprint expression by exploiting the complementary characteristics from diverse views. Besides this, multiview palmprint feature learning can handle the long-standing interference information in the images, e.g., noise, rotations, and shadows, when palmprint images are acquired in open-set environments. In conclusion, multiview-based representation methods can adapt to more complex application environments with competitive and stable recognition performances.

Author Contributions

Conceptualization, S.Z. and L.F.; formal analysis, S.Z. and J.W.; writing original draft preparation, S.Z.; writing review and editing, S.Z. and J.W.; supervision, J.W.; funding acquisition, S.Z., L.F., and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China (Grant nos. 62106052, 62176066, and 62006059), and the China Postdoctoral Science Foundation (Grant 2022M710827).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their careful reading of the manuscript and their insightful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fei, L.; Zhang, B.; Xu, Y.; Guo, Z.; Wen, J.; Jia, W. Learning discriminant direction binary palmprint descriptor. IEEE Trans. Image Process. 2019, 28, 3808–3820. [Google Scholar] [CrossRef] [PubMed]
  2. Zhao, S.; Zhang, B. Learning complete and discriminative direction pattern for robust palmprint recognition. IEEE Trans. Image Process. 2020, 30, 1001–1014. [Google Scholar] [CrossRef]
  3. Ungureanu, A.S.; Salahuddin, S.; Corcoran, P. Toward unconstrained palmprint recognition on consumer devices: A literature review. IEEE Access 2020, 8, 86130–86148. [Google Scholar] [CrossRef]
  4. Genovese, A.; Piuri, V.; Plataniotis, K.N.; Scotti, F. PalmNet: Gabor-PCA Convolutional Networks for Touchless Palmprint Recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3160–3174. [Google Scholar] [CrossRef] [Green Version]
  5. Zhang, L.; Li, L.; Yang, A.; Shen, Y.; Yang, M. Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach. Pattern Recognit. 2017, 69, 199–212. [Google Scholar] [CrossRef]
  6. Gao, Q.; Xia, W.; Wan, Z.; Xie, D.; Zhang, P. Tensor-SVD based graph learning for multi-view subspace clustering. Proc. AAAI Conf. Artif. Intell. 2020, 34, 3930–3937. [Google Scholar] [CrossRef]
  7. Chen, Y.; Xiao, X.; Zhou, Y. Multi-view subspace clustering via simultaneously learning the representation tensor and affinity matrix. Pattern Recognit. 2020, 106, 107441. [Google Scholar] [CrossRef]
  8. Chen, Y.; Xiao, X.; Zhou, Y. Jointly learning kernel representation tensor and affinity matrix for multi-view clustering. IEEE Trans. Multimed. 2020, 22, 1985–1997. [Google Scholar] [CrossRef]
  9. Zhao, S.; Zhang, B. Deep discriminative representation for generic palmprint recognition. Pattern Recognit. 2020, 98, 107071. [Google Scholar] [CrossRef]
  10. Zhong, D.; Du, X.; Zhong, K. Decade progress of palmprint recognition: A brief survey. Neurocomputing 2019, 328, 16–28. [Google Scholar] [CrossRef]
  11. Chowdhury, A.M.M.; Imtiaz, M.H. Contactless Fingerprint Recognition Using Deep Learning—A Systematic Review. J. Cybersecur. Priv. 2022, 2, 714–730. [Google Scholar] [CrossRef]
  12. Zhao, S.; Zhang, B. Robust adaptive algorithm for hyperspectral palmprint region of interest extraction. IET Biom. 2019, 8, 391–400. [Google Scholar] [CrossRef]
  13. Kumar, A.; Zhang, D. Personal recognition using hand shape and texture. IEEE Trans. Image Process. 2006, 15, 2454–2461. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Younesi, A.; Amirani, M.C. Gabor filter and texture based features for palmprint recognition. Procedia Comput. Sci. 2017, 108, 2488–2495. [Google Scholar] [CrossRef]
  15. Huang, D.-S.; Jia, W.; Zhang, D. Palmprint verification based on principal lines. Pattern Recognit. 2008, 41, 1316–1328. [Google Scholar] [CrossRef]
  16. Guo, Z.; Zhang, D.; Zhang, L.; Zuo, W. Palmprint verification using binary orientation co-occurrence vector. Pattern Recognit. Lett. 2009, 30, 1219–1227. [Google Scholar] [CrossRef]
  17. Jia, W.; Zhang, B.; Lu, J.; Zhu, Y.; Zhao, Y.; Zuo, W.; Ling, H. Palmprint recognition based on complete direction representation. IEEE Trans. Image Process. 2017, 26, 4483–4498. [Google Scholar] [CrossRef] [PubMed]
  18. Fei, L.; Xu, Y.; Zhang, B.; Fang, X.; Wen, J. Low-rank representation integrated with principal line distance for contactless palmprint recognition. Neurocomputing 2016, 218, 264–275. [Google Scholar] [CrossRef]
  19. Zhao, S.; Zhang, B. Learning salient and discriminative descriptor for palmprint feature extraction and identification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5219–5230. [Google Scholar] [CrossRef]
  20. Li, G.; Kim, J. Palmprint recognition with local micro-structure tetra pattern. Pattern Recognit. 2017, 61, 29–46. [Google Scholar] [CrossRef]
  21. Jia, W.; Ren, Q.; Zhao, Y.; Li, S.; Min, H.; Chen, Y. EEPNet: An efficient and effective convolutional neural network for palmprint recognition. Pattern Recognit. Lett. 2022, 159, 140–149. [Google Scholar] [CrossRef]
  22. Fei, L.; Zhang, B.; Teng, S.; Guo, Z.; Li, S.; Jia, W. Joint Multiview Feature Learning for Hand-Print Recognition. IEEE Trans. Instrum. Meas. 2020, 69, 9743–9755. [Google Scholar] [CrossRef]
  23. NTU Forensic Image Databases. Available online: https://github.com/BFLTeam/NTU_Dataset (accessed on 1 October 2019).
  24. Wang, X.; Fu, L.; Zhang, Y.; Wang, Y.; Li, Z. MMatch: Semi-supervised Discriminative Representation Learning For Multi-view Classification. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6425–6436. [Google Scholar] [CrossRef]
  25. Li, J.; Zhang, B.; Lu, G.; Zhang, D. Generative multi-view multi-feature learning for classification. Inf. Fusion 2019, 45, 215–256. [Google Scholar] [CrossRef]
  26. Li, J.; Li, Z.; Lu, G.; Xu, Y.; Zhang, D. Asymmetric gaussian process multi-view learning for visual classification. Inf. Fusion 2021, 65, 108–118. [Google Scholar] [CrossRef]
  27. Zhao, S.; Wu, J.; Fei, L.; Zhang, B.; Zhao, P. Double-cohesion learning based multiview and discriminant palmprint recognition. Inf. Fusion 2022, 83, 96–109. [Google Scholar] [CrossRef]
  28. Zhao, S.; Fei, L.; Wen, J.; Wu, J.; Zhang, B. Intrinsic and Complete Structure Learning Based Incomplete Multiview Clustering. IEEE Trans. Multimed. 2021. [Google Scholar] [CrossRef]
  29. Yang, M.; Deng, C.; Nie, F. Adaptive-weighting discriminative regression for multi-view classification. Pattern Recognit. 2019, 88, 236–245. [Google Scholar] [CrossRef]
  30. Tao, H.; Hou, C.; Yi, D.; Zhu, J. Multiview classification with cohesion and diversity. IEEE Trans. Cybern. 2018, 50, 2124–2137. [Google Scholar] [CrossRef]
  31. Shu, T.; Zhang, B.; Tang, Y.Y. Multi-view classification via a fast and effective multi-view nearest-subspace classifier. IEEE Access 2019, 7, 49669–49679. [Google Scholar] [CrossRef]
  32. Gupta, A.; Khan, R.U.; Singh, V.K.; Tanveer, M.; Kumar, D.; Chakraborti, A.; Pachori, R.B. A novel approach for classification of mental tasks using multiview ensemble learning (MEL). Neurocomputing 2020, 417, 558–584. [Google Scholar] [CrossRef]
  33. Lin, C.; Kumar, A. Contactless and partial 3D fingerprint recognition using multi-view deep representation. Pattern Recognit. 2018, 83, 314–327. [Google Scholar] [CrossRef]
  34. Zhao, S.; Fei, L.; Wen, J.; Zhang, B.; Zhao, P.; Li, S. Structure Suture Learning-Based Robust Multiview Palmprint Recognition. IEEE Trans. Neural Netw. Learn. Syst. 2022. [Google Scholar] [CrossRef]
  35. Liang, X.; Li, Z.; Fan, D.; Zhang, B.; Lu, G.; Zhang, D. Innovative contactless palmprint recognition system based on dual-camera alignment. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 6464–6476. [Google Scholar] [CrossRef]
  36. CASIA Palmprint Image Database. Available online: http://biometrics.idealtest.org/ (accessed on 1 January 2018).
  37. GPDS Palmprint Image Database. Available online: http://www.gpds.ulpgc.es (accessed on 1 May 2016).
  38. Kumar, A. Incorporating cohort information for reliable palmprint authentication. In Proceedings of the 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, Bhubaneswar, India, 16–19 December 2008; pp. 583–590. [Google Scholar]
  39. REgim Sfax Tunisian Hand Database. Available online: http://www.regim.org/publications/databases/REST/ (accessed on 2 September 2020).
  40. Zhang, D.; Kong, W.-K.; You, J.; Wong, M. Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1041–1050. [Google Scholar] [CrossRef] [Green Version]
  41. Zhang, D.; Guo, Z.; Lu, G.; Zhang, L.; Zuo, W. An online system of multispectral palmprint verification. IEEE Trans. Instrum. Meas. 2009, 59, 480–490. [Google Scholar] [CrossRef] [Green Version]
  42. Zhao, S.; Zhang, B.; Chen, C.P. Joint deep convolutional feature representation for hyperspectral palmprint recognition. Inf. Sci. 2019, 489, 167–181. [Google Scholar] [CrossRef]
  43. Zhao, S.; Zhang, B. Joint constrained least-square regression with deep convolutional feature for palmprint recognition. IEEE Trans. Syst. Man Cybern. Syst. 2020, 52, 511–522. [Google Scholar] [CrossRef]
  44. Harun, N.; Abd Rahman, W.E.Z.W.; Abidin, S.Z.Z.; Othman, P.J. New algorithm of extraction of palmprint region of interest (ROI). J. Phys. Conf. Ser. 2017, 890, 012024. [Google Scholar] [CrossRef] [Green Version]
  45. Li, Q.; Lai, H.; You, J. A novel method for touchless palmprint ROI extraction via skin color analysis. In Recent Trends in Intelligent Computing, Communication and Devices; Springer: Singapore, 2020; pp. 271–276. [Google Scholar]
  46. Chai, T.; Wang, S.; Sun, D. A palmprint ROI extraction method for mobile devices in complex environment. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Chengdu, China, 6–10 November 2016; pp. 1342–1346. [Google Scholar]
  47. Mokni, R.; Drira, H.; Kherallah, M. Combining shape analysis and texture pattern for palmprint identification. Multimed. Tools Appl. 2017, 76, 23981–24008. [Google Scholar] [CrossRef]
  48. Gao, F.; Cao, K.; Leng, L.; Yuan, Y. Mobile palmprint segmentation based on improved active shape model. J. Multimed. Inf. Syst. 2018, 5, 221–228. [Google Scholar]
  49. Aykut, M.; Ekinci, M. Developing a contactless palmprint authentication system by introducing a novel ROI extraction method. Image Vis. Comput. 2015, 40, 65–74. [Google Scholar] [CrossRef] [Green Version]
  50. Kazemi, V.; Sullivan, J. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1867–1874. [Google Scholar]
  51. Matkowski, W.M.; Chai, T.; Kong, A.W.K. Palmprint recognition in uncontrolled and uncooperative environment. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1601–1615. [Google Scholar] [CrossRef] [Green Version]
  52. Izadpanahkakhk, M.; Razavi, S.; Taghipour-Gorjikolaie, M.; Zahiri, S.; Uncini, A. Deep region of interest and feature extraction models for palmprint verification using convolutional neural networks transfer learning. Appl. Sci. 2018, 8, 1210. [Google Scholar] [CrossRef] [Green Version]
  53. Liu, Y.; Kumar, A. A deep learning based framework to detect and recognize humans using contactless palmprints in the wild. arXiv 2018, arXiv:1812.11319. [Google Scholar]
  54. Leng, L.; Gao, F.; Chen, Q.; Kim, C. Palmprint recognition system on mobile devices with double-line-single-point assistance. Pers. Ubiquitous Comput. 2018, 22, 93–104. [Google Scholar] [CrossRef]
  55. Afifi, M. 11K hands: Gender recognition and biometric identification using a large dataset of hand images. Multimed. Tools Appl. 2019, 78, 20835–20854. [Google Scholar] [CrossRef] [Green Version]
  56. Xiao, Q.; Lu, J.; Jia, W.; Liu, X. Extracting palmprint ROI from whole hand image using straight line clusters. IEEE Access 2019, 7, 74327–74339. [Google Scholar] [CrossRef]
  57. Lin, S.; Xu, T.; Yin, X. Region of interest extraction for palmprint and palm vein recognition. In Proceedings of the 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 538–542. [Google Scholar]
  58. Michele, A.; Colin, V.; Santika, D. Mobilenet convolutional neural networks and support vector machines for palmprint recognition. Procedia Comput. Sci. 2019, 157, 110–117. [Google Scholar] [CrossRef]
  59. Daas, S.; Yahi, A.; Bakir, T.; Sedhane, M.; Boughazi, M.; Bourennane, E. Multimodal biometric recognition systems using deep learning based on the finger vein and finger knuckle print fusion. IET Image Process. 2021, 14, 3859–3868. [Google Scholar] [CrossRef]
  60. Liu, Y.; Kumar, A. Contactless palmprint identification using deeply learned residual features. IEEE Trans. Biom. Behav. Identity Sci. 2020, 2, 172–181. [Google Scholar] [CrossRef]
  61. Idrissi, A.E.; Merabet, Y.E.; Ruichek, Y. Palmprint recognition using state-of-the-art local texture descriptors: A comparative study. IET Biom. 2020, 9, 143–153. [Google Scholar] [CrossRef]
  62. Wu, X.; Zhang, D.; Wang, K.; Huang, B. Palmprint classification using principal lines. Pattern Recognit. 2004, 37, 1987–1998. [Google Scholar] [CrossRef]
  63. Sun, Z.; Tan, T.; Wang, Y.; Li, S.Z. Ordinal palmprint represention for personal identification. Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 2005, 1, 279–284. [Google Scholar]
  64. Fei, L.; Zhang, B.; Zhang, W.; Teng, S. Local apparent latent direction extraction for palmprint recognition. Inform. Sci. 2019, 473, 59–72. [Google Scholar] [CrossRef]
  65. Annadurai, C.; Nelson, I.; Devi, K.N.; Manikandan, R.; Jhanjhi, N.Z.; Masud, M.; Sheikh, A. Biometric Authentication-Based Intrusion Detection Using Artificial Intelligence Internet of Things in Smart City. Energies 2022, 15, 7430. [Google Scholar] [CrossRef]
  66. Abdullahi, S.B.; Khunpanuk, C.; Bature, Z.A.; Chiroma, H.; Pakkaranang, N.; Abubakar, A.B.; Ibrahim, A.H. Biometric Information Recognition Using Artificial Intelligence Algorithms: A Performance Comparison. IEEE Access 2022, 10, 49167–49183. [Google Scholar] [CrossRef]
  67. Chen, Y.; Yi, Z. Locality-constrained least squares regression for subspace clustering. Knowl.-Based Syst. 2019, 163, 51–56. [Google Scholar] [CrossRef]
  68. Wang, M.; Wang, Q.; Hong, D.; Roy, S.K.; Chanussot, J. Learning Tensor Low-Rank Representation for Hyperspectral Anomaly Detection. IEEE Trans. Cybern. 2022, 53, 679–691. [Google Scholar] [CrossRef] [PubMed]
  69. Wu, D.; Chang, W.; Lu, J.; Nie, F.; Wang, R.; Li, X. Adaptive-order proximity learning for graph-based clustering. Pattern Recognit. 2022, 126, 108550. [Google Scholar] [CrossRef]
  70. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C.; Kot, A.C. Low-rankness guided group sparse representation for image restoration. IEEE Trans. Neural Netw. Learn. Syst. 2022. [Google Scholar] [CrossRef] [PubMed]
  71. Zhao, S.; Zhang, B.; Li, S. Discriminant sparsity based least squares regression with l1 regularization for feature representation. In Proceedings of the IEEE International Conference on Acoustics, Speech Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1504–1508. [Google Scholar]
  72. Zhao, S.; Wu, J.; Zhang, B.; Fei, L. Low-rank inter-class sparsity based semi-flexible target least squares regression for feature representation. Pattern Recognit. 2022, 123, 108346. [Google Scholar] [CrossRef]
  73. Liu, G.; Yan, S.C. Latent low-rank representation for subspace segmentation and feature extraction. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 1615–1622. [Google Scholar]
  74. Zhang, Y.; Xiang, M.; Yang, B. Low-rank preserving embedding. Pattern Recognit. 2017, 70, 112–125. [Google Scholar] [CrossRef]
  75. Wong, W.K.; Lai, Z.; Wen, J.; Fang, X.; Lu, Y. Low-rank embedding for robust image feature extraction. IEEE Trans. Image Process. 2017, 26, 2905–2917. [Google Scholar] [CrossRef]
  76. Wu, Z.; Liu, S.; Ding, C.; Ren, Z.; Xie, S. Learning graph similarity with large spectral gap. IEEE Trans. Syst. Man, Cybern. Syst. 2021, 51, 1590–1600. [Google Scholar] [CrossRef]
  77. Li, Z.; Nie, F.; Chang, X.; Yang, Y.; Zhang, C.; Sebe, N. Dynamic affinity graph construction for spectral clustering using multiple features. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 6323–6332. [Google Scholar] [CrossRef]
  78. Lu, J.; Wang, H.; Zhou, J.; Chen, Y.; Lai, Z.; Hu, Q. Low-rank adaptive graph embedding for unsupervised feature extraction. Pattern Recognit. 2021, 113, 107758. [Google Scholar] [CrossRef]
  79. Qiao, L.; Chen, S.; Tan, X. Sparsity preserving projections with applications to face recognition. Pattern Recognit. 2010, 43, 331–341. [Google Scholar] [CrossRef] [Green Version]
  80. Yang, W.; Wang, Z.; Sun, C. A collaborative representation based projections method for feature extraction. Pattern Recognit. 2015, 48, 20–27. [Google Scholar] [CrossRef]
  81. Elhamifar, E.; Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2765–2781. [Google Scholar] [CrossRef] [Green Version]
  82. Zheng, S.; Cai, X.; Ding, C.; Nie, F.; Huang, H. A closed form solution to multi-view low-rank regression. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–29 January 2015; pp. 1973–1979. [Google Scholar]
  83. Yaxin, Z.; Huanhuan, L.; Xuefei, G.; Lili, L. Palmprint recognition based on multi-feature integration. In Proceedings of the 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi'an, China, 3–5 October 2016; pp. 992–995. [Google Scholar]
  84. Zhao, S.; Nie, W.; Zhang, B. Multi-feature fusion using collaborative residual for hyperspectral palmprint recognition. In Proceedings of the 2018 IEEE 4th International Conference on Computer and Communications (ICCC), Chengdu, China, 7–10 December 2018; pp. 1402–1406. [Google Scholar]
  85. Zheng, Y.; Fei, L.; Wen, J.; Teng, S.; Zhang, W.; Rida, I. Joint Multiple-type Features Encoding for Palmprint Recognition. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, ACT, Australia, 1–4 December 2020; pp. 1710–1717. [Google Scholar]
  86. Jaswal, G.; Kaul, A.; Nath, R. Multiple feature fusion for unconstrained palmprint authentication. Comput. Electr. Eng. 2018, 72, 53–78. [Google Scholar] [CrossRef]
  87. Dai, J.; Zhou, J. Multifeature-based high-resolution palmprint recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 945–957. [Google Scholar]
  88. Zhou, Q.; Jia, W.; Yu, Y. Multi-stream Convolutional Neural Networks Fusion for Palmprint Recognition. In Proceedings of the Chinese Conference on Biometric Recognition, Beijing, China, 11–13 November 2022; pp. 72–81. [Google Scholar]
  89. Liang, L.; Chen, T.; Fei, L. Orientation space code and multi-feature two-phase sparse representation for palmprint recognition. Int. J. Mach. Learn. Cybern. 2020, 11, 1453–1461. [Google Scholar] [CrossRef]
  90. Jia, W.; Ling, B.; Chau, K.W.; Heutte, L. Palmprint identification using restricted fusion. Appl. Math. Comput. 2008, 205, 927–934. [Google Scholar] [CrossRef] [Green Version]
  91. Fei, L.; Zhang, B.; Zhang, L.; Jia, W.; Wen, J.; Wu, J. Learning compact multifeature codes for palmprint recognition from a single training image per palm. IEEE Trans. Multimed. 2020, 23, 2930–2942. [Google Scholar] [CrossRef]
  92. Gayathri, R.; Ramamoorthy, P. Multifeature palmprint recognition using feature level fusion. Int. J. Eng. Res. Appl. 2012, 2, 1048–1054. [Google Scholar]
  93. You, J.; Kong, W.K.; Zhang, D.; Cheung, K.H. On hierarchical palmprint coding with multiple features for personal identification in large databases. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 234–243. [Google Scholar] [CrossRef]
  94. Badrinath, G.S.; Gupta, P. An efficient multi-algorithmic fusion system based on palmprint for personnel identification. In Proceedings of the 15th International Conference on Advanced Computing and Communications (ADCOM 2007), Guwahati, India, 18–21 December 2007; pp. 759–764. [Google Scholar]
  95. Zhou, J.; Sun, D.; Qiu, Z.; Xiong, K.; Liu, D.; Zhang, Y. Palmprint recognition by fusion of multi-color components. In Proceedings of the 2009 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, Zhangjiajie, China, 10–11 October 2009; pp. 273–278. [Google Scholar]
  96. Zhang, S.; Wang, H.; Huang, W. Palmprint identification combining hierarchical multi-scale complete LBP and weighted SRC. Soft Comput. 2020, 24, 4041–4053. [Google Scholar] [CrossRef]
  97. Zhou, L.; Guo, H.; Lin, S.; Hao, S.; Zhao, K. Combining multi-wavelet and CNN for palmprint recognition against noise and misalignment. IET Image Process. 2019, 13, 1470–1478. [Google Scholar] [CrossRef]
  98. Zhang, L.; Shen, Y.; Li, H.; Lu, J. 3D palmprint identification using block-wise features and collaborative representation. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 1730–1736. [Google Scholar] [CrossRef] [Green Version]
  99. Wu, L.; Xu, Y.; Cui, Z.; Zuo, Y.; Zhao, S.; Fei, L. Triple-type feature extraction for palmprint recognition. Sensors 2021, 21, 4896. [Google Scholar] [CrossRef] [PubMed]
  100. Ahmad, M.I.; Woo, W.L.; Dlay, S. Non-stationary feature fusion of face and palmprint multimodal biometrics. Neurocomputing 2016, 177, 49–61. [Google Scholar] [CrossRef]
  101. Izadpanahkakhk, M.; Razavi, S.M.; Taghipour-Gorjikolaie, M.; Zahiri, S.H.; Uncini, A. Joint feature fusion and optimization via deep discriminative model for mobile palmprint verification. J. Electron. Imaging 2019, 28, 043026. [Google Scholar] [CrossRef]
  102. Attallah, B.; Brik, Y.; Chahir, Y.; Djerioui, M.; Boudjelal, A. Fusing palmprint, finger-knuckle-print for bi-modal recognition system based on LBP and BSIF. In Proceedings of the 2019 6th International Conference on Image and Signal Processing and their Applications (ISPA), Mostaganem, Algeria, 24–25 November 2019; pp. 1–5. [Google Scholar]
  103. Li, Z.; Liang, X.; Fan, D.; Li, J.; Zhang, D. BPFNet: A unified framework for bimodal palmprint alignment and fusion. In Proceedings of the International Conference on Neural Information Processing, Bali, Indonesia, 8–12 December 2021; pp. 28–36. [Google Scholar]
  104. Rane, M.E.; Bhadade, U. Face and palmprint Biometric recognition by using weighted score fusion technique. In Proceedings of the 2020 IEEE Pune Section International Conference (PuneCon), Pune, India, 16–18 December 2020; pp. 11–16. [Google Scholar]
  105. Fei, L.; Qin, J.; Liu, P.; Wen, J.; Tian, C.; Zhang, B.; Zhao, S. Jointly Learning Multiple Curvature Descriptor for 3D Palmprint Recognition. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 302–308. [Google Scholar]
Figure 1. Palmprint samples with different types of interference information in the backgrounds [23].
Figure 2. Categories of palmprint images from (a) contact-based, (b) contactless, (c) hyperspectral, (d) high-resolution, (e) 3D, and (f) palm–vein palmprint databases, respectively.
Figure 3. A brief overview of the acquisition of the different types of palmprint databases.
Figure 4. Different types of original palmprint images, where the samples in (a–h) are from the contactless palmprint databases of (a) CASIA [36], (b) GPDS [37], (c) TJI [5], (d) IITD [38], and (e) NTU [23], the contact-based palmprint database of (f) PolyU [40], and the near-infrared palmprint databases of (g) PV_790 [19] and (h) M_NIR [41], respectively.
Figure 5. A brief pipeline of ROI extraction from the palmprint image.
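Since the figure itself cannot be reproduced here, the following minimal sketch illustrates one common form of this pipeline: the palm image is rotated so that the line joining two finger-valley key points (e.g., those located by a key-point detector such as [51]) becomes horizontal, and a square region at a fixed offset below that line is cropped as the ROI. The function name, the offset factor, and all parameter values are illustrative assumptions, not taken from any specific surveyed method.

```python
# Minimal, illustrative ROI extraction sketch (not any single surveyed method).
# Assumptions: a grayscale palm image and two finger-valley key points that
# have already been located. Bounds checking and the rotation sign convention
# (which depends on point order and image origin) are omitted for brevity.
import numpy as np
import cv2

def extract_roi(gray, p1, p2, roi_size=128):
    """Crop a square ROI below the line joining two finger-valley key points."""
    p1, p2 = np.float32(p1), np.float32(p2)
    cx, cy = (p1 + p2) / 2.0
    # Rotate the image so the key-point axis becomes horizontal.
    angle = np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))
    rot = cv2.getRotationMatrix2D((float(cx), float(cy)), float(angle), 1.0)
    h, w = gray.shape
    aligned = cv2.warpAffine(gray, rot, (w, h))
    # Place the ROI at a fixed offset below the axis; the factor 0.3 is an
    # arbitrary illustrative choice.
    d = float(np.linalg.norm(p2 - p1))
    x, y = int(cx - d / 2.0), int(cy + 0.3 * d)
    roi = aligned[y:y + int(d), x:x + int(d)]
    return cv2.resize(roi, (roi_size, roi_size))
```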
Figure 6. Some typical palmprint ROI samples, where the rows, from top to bottom, are from the CASIA [36], GPDS [37], TJI [5], IITD [38], PolyU [40], PV_790 [19], and M_NIR [41] databases, respectively.
Figure 7. A brief pipeline of local orientation feature extraction from the ROI image.
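To make this pipeline concrete, the sketch below implements a competitive-coding-style orientation extractor: the ROI is filtered with a small bank of oriented Gabor-like kernels, and each pixel receives the index of the filter with the strongest (most negative) response, since palm lines are darker than their surroundings. The kernel parameters and the number of orientations are illustrative assumptions.

```python
# Competitive-coding-style orientation extraction; all filter parameters
# (ksize, sigma, lam, gamma, six orientations) are illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, ksize=17, sigma=3.0, lam=8.0, gamma=0.5):
    """Real part of a Gabor filter tuned to orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero-mean, so flat regions respond with ~0

def competitive_code(roi, n_orient=6):
    """Assign each ROI pixel the index of its dominant line orientation."""
    roi = roi.astype(np.float64)
    responses = np.stack([
        convolve(roi, gabor_kernel(k * np.pi / n_orient), mode='nearest')
        for k in range(n_orient)
    ])
    # Palm lines are darker than their surroundings, so the minimum response
    # over orientations wins (the usual competitive rule).
    return np.argmin(responses, axis=0)  # H x W code map with values 0..n-1
```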
Figure 8. The pipeline of the multiview palmprint feature learning framework.
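As a baseline illustration of this framework's input/output contract, the sketch below fuses several per-view feature matrices in the simplest possible way: each view is z-score normalized and the views are concatenated before nearest-neighbor matching. The surveyed methods replace the concatenation step with learned joint projections or fusion models; the function names here are ours.

```python
# Baseline multiview fusion: z-score normalize each view, then concatenate.
# This only illustrates the shape of the problem; real multiview methods
# learn a joint subspace or fusion model instead of plain concatenation.
import numpy as np

def fuse_views(views, eps=1e-8):
    """views: list of (n_samples, d_v) arrays, one per view
    (e.g., direction codes, texture histograms, deep features)."""
    normed = []
    for v in views:
        v = np.asarray(v, dtype=np.float64)
        normed.append((v - v.mean(axis=0)) / (v.std(axis=0) + eps))
    return np.concatenate(normed, axis=1)

def match(gallery, g_labels, probe):
    """Nearest-neighbor matching of fused probe features (L2 distance)."""
    d = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=2)
    return np.asarray(g_labels)[np.argmin(d, axis=1)]
```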
Table 1. Detailed descriptions of different types of palmprint databases.

| Database | Total Number | Individual Number | Contactless or Contact-Based | Posing Variation | Year |
|---|---|---|---|---|---|
| CASIA | 5501 | 310 | Contactless | Small | 2005 |
| GPDS | 1000 | 100 | Contactless | Small | 2011 |
| TJI | 12,000 | 300 | Contactless | Small | 2017 |
| NTU-PI-V1 | 7781 | 1093 | Contactless | Large | 2019 |
| REST | 1945 | 179 | Contactless | Small | 2021 |
| IITD | 2601 | 230 | Contactless | Small | 2006 |
| PolyU | 6000 | 500 | Contact-based | No | 2003 |
| PV_790 | 5180 | 2109 | Contact-based | No | 2018 |
| M_NIR | 6000 | 500 | Contact-based | No | 2009 |
| M_Blue | 6000 | 500 | Contact-based | No | 2009 |
Table 2. Overview of palmprint ROI extraction methods.

| Method | Authors | Ref. | Year |
|---|---|---|---|
| Active shape model | Gao, F.; Cao, K. | [48] | 2018 |
| Active appearance model | Aykut, M.; Ekinci, M. | [49] | 2015 |
| Ensemble of regression trees | Kazemi, V.; Sullivan, J. | [50] | 2014 |
| Key point detection | Matkowski, W.M.; Chai, T.; Kong, A.W.K. | [51] | 2019 |
| CNN transfer | Izadpanahkakhk, M.; Razavi, S. | [52] | 2018 |
| Palmprint detector | Liu, Y.; Kumar, A. | [53] | 2018 |
| On-screen guide | Leng, L.; Gao, F.; Chen, Q.; Kim, C. | [54] | 2018 |
| Whole image | Afifi, M. | [55] | 2019 |
| Robust adaptive hyperspectral ROI extraction | Zhao, S.; Zhang, B. | [12] | 2019 |
Table 3. Overview of single-view palmprint representation methods.

| Method | Ref. | Year | Description | Classifier | Databases |
|---|---|---|---|---|---|
| LDDBP | [1] | 2019 | Dominant direction learning | Chi-square distance | CASIA, IITD, TJI, HFUT |
| LCDDP | [2] | 2021 | Complete and discriminative direction | Euclidean distance | PolyU, IITD, CASIA, GPDS, REST, M_NIR, PV_790 |
| Local texture descriptor | [61] | 2020 | Local orientation encoding | L1 distance | TJI, CASIA, GPDS, IITD |
| Principal lines | [62] | 2004 | Principal lines detection | Chi-square distance | PolyU |
| Ordinal palmprint feature | [63] | 2005 | Ordinal direction encoding | Hamming distance | PolyU |
| DDR | [9] | 2020 | Deep discriminative representation | Euclidean distance | CASIA, IITD, M_NIR, M_B, M_G, M_R |
| JDCFR | [42] | 2019 | Joint deep representation | Euclidean distance | Hyperspectral palmprint database |
| PalmNet | [4] | 2019 | Deep palmprint representation | Euclidean distance | CASIA, IITD, REST, TJI |
| ALDC | [64] | 2019 | Local apparent latent direction | Chi-square distance | PolyU, IITD, GPDS, CASIA |
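Several of the descriptors in Table 3 (e.g., LDDBP [1] and ALDC [64]) are matched with the chi-square distance between block-wise feature histograms. A minimal sketch of this matching rule, assuming the histograms have already been extracted, is given below; the function names are ours.

```python
# Chi-square matching of block-wise feature histograms.
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two concatenated block-wise histograms."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def identify(probe_hist, gallery_hists, gallery_labels):
    """Nearest-neighbor identification under the chi-square distance."""
    dists = [chi_square(probe_hist, g) for g in gallery_hists]
    return gallery_labels[int(np.argmin(dists))]
```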
Table 4. Overview of advanced techniques that mainly motivate the multiview learning methods.

| Class | Method | Ref. | Year | Brief Description | Contributions to MFL |
|---|---|---|---|---|---|
| LSR | LC_LSR | [67] | 2019 | Local structure preservation | Provide a consistency subspace for feature enhancement. |
| LSR | DS_LSR | [71] | 2020 | Self-representation for local structure capture | |
| LSR | LIS_LSR | [72] | 2022 | Low-rank structure learning for robust representation | |
| LR-R | LRR | [73] | 2011 | Latent low-rank structure representation | Provide a low-rank representation for robust recognition. |
| LR-R | LRP | [74] | 2017 | Low-rank preserving embedding | |
| LR-R | LR_IMF | [75] | 2017 | Low-rank projection learning | |
| Graph learning | LRS_LSG | [76] | 2021 | Learning graph similarity | Utilize the local structures between different samples for representation enhancement. |
| Graph learning | DAG_SC | [77] | 2018 | Dynamic affinity graph construction | |
| Graph learning | LRAG | [78] | 2021 | Low-rank adaptive graph embedding | |
| SR | SPP | [79] | 2010 | Sparsity preserving projections | Structure reconstruction and robust representation. |
| SR | CRP | [80] | 2015 | Collaborative-representation-based projections | |
| SR | SSL | [81] | 2013 | Sparse subspace clustering | |
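As a concrete instance of the sparse/collaborative-representation family in Table 4, the sketch below implements the generic collaborative-representation-based classification rule that underlies methods such as CRP [80]: a probe is coded over the whole gallery with an l2-regularized least-squares fit and assigned to the class with the smallest class-wise reconstruction residual. This is the textbook CRC rule, not the specific projection-learning algorithm of [80].

```python
# Generic collaborative-representation-based classification (CRC) rule.
import numpy as np

def crc_classify(X, labels, y, lam=0.01):
    """X: (d, n) gallery matrix with one l2-normalized feature per column;
    labels: length-n class labels; y: (d,) probe feature; lam: ridge weight."""
    n = X.shape[1]
    # Ridge-regression coding: alpha = (X^T X + lam * I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    labels = np.asarray(labels)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruct the probe from this class's gallery columns only.
        res = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best
```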
Table 5. Overview of multiview palmprint representation methods.

| Method | Ref. | Year | Description | Databases | Application Categories |
|---|---|---|---|---|---|
| DC_MDPR | [27] | 2022 | Discriminative representation with double-cohesion learning | CASIA, IITD, GPDS, TJI, M_NIR, PV_790 | Contact-based, contactless, palm–vein |
| SSL_RMPR | [34] | 2022 | Multiview representation in the same subspace | CASIA, IITD, GPDS, TJI, PV_790, DHV_860 | Contact-based, contactless, palm–vein, dorsal hand vein |
| JMvFL | [22] | 2020 | Joint multiview feature learning | PolyU, CASIA, TJI, GPDS, PolyU_FKP | Contact-based, contactless, finger-knuckle-print |
| PR_MFR | [83] | 2016 | Multi-feature integration with local and global features | PolyU | Contact-based |
| MFF_CRPR | [84] | 2018 | Multi-feature fusion using collaborative residual | Hyperspectral palmprint database | Hyperspectral |
| GMF Encoding | [85] | 2020 | Joint multiple-type feature encoding | CASIA, IITD, TJI | Contactless |
| MFF_UPR | [86] | 2018 | Multiple feature fusion at the feature level | CASIA, IITD, PolyU | Contact-based, contactless |
| MHPR | [87] | 2010 | Feature fusion with four types of features | High-resolution palmprint database | High-resolution |
| MSCNN | [88] | 2022 | Multi-stream CNN fusion | PolyU, M_B, HFUT, TJI | Contact-based, contactless |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
