Article

An Improved Recognition Approach for Noisy Multispectral Palmprint by Robust L2 Sparse Representation with a Tensor-Based Extreme Learning Machine

1 School of Electronics and Information Engineering, MOE Key Lab for Intelligent Networks and Network Security, Xi’an Jiaotong University, Xi’an 710049, China
2 Guangdong Xi’an Jiaotong University Academy, No. 3, Shuxiangdong Road, Daliang, Foshan 528000, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(2), 235; https://doi.org/10.3390/s19020235
Submission received: 3 December 2018 / Revised: 2 January 2019 / Accepted: 7 January 2019 / Published: 9 January 2019

Abstract

Over the past decades, multispectral palmprint recognition technologies have attracted increasing attention because they offer more abundant spatial and spectral characteristics than the single-spectrum case. Motivated by this, an innovative robust L2 sparse representation with tensor-based extreme learning machine (RL2SR-TELM) algorithm is put forward, which uses an adaptive image-level fusion strategy to accomplish multispectral palmprint recognition. Firstly, we construct a robust L2 sparse representation (RL2SR) optimization model to calculate the linear representation coefficients. To suppress the effect caused by noise contamination, we introduce a logistic function into the RL2SR model to evaluate the representation residual. Secondly, we propose a novel weighted sparse and collaborative concentration index (WSCCI) to calculate the fusion weight adaptively. Finally, we put forward a TELM approach to carry out the classification task. It can deal with high-dimensional data directly and preserve the image spatial information well. Extensive experiments are implemented on the benchmark multispectral palmprint database provided by PolyU. The experiment results validate that our RL2SR-TELM algorithm outperforms a number of state-of-the-art multispectral palmprint recognition algorithms both when the images are noise-free and when they are contaminated by different noises.

1. Introduction

Palmprint recognition has become a notable biometric approach and has attracted increasing attention in recent years. In comparison with some other biological features (e.g., the iris and fingerprints), palmprints have a larger collection area with more abundant information. Besides, palmprints possess the characteristics of uniqueness, stability, scalability and non-contact acquisition. As a consequence, they offer strong anti-noise capability and efficient discrimination performance.
The current palmprint recognition algorithms can be mainly categorized into several sorts, such as subspace-based methods, feature-based methods and sparse representation-based classification (SRC) methods. The subspace-based methods [1,2,3,4,5,6,7,8,9,10] adopt dimension reduction theory to accomplish the feature space transformation. This reduces the data complexity and efficiently improves the discrimination of image characteristics. The conventional subspace transformation methods mainly include principal component analysis (PCA) [1], linear discriminant analysis (LDA) [4] and independent component analysis (ICA) [7]. However, due to their sensitivity to lighting, noise and other contaminations, the conventional linear discriminant methods no longer meet the requirements of actual palmprint recognition problems. To address these issues, a nonlinear spatial structure transformation technique, namely the kernel PCA method [9,10], was introduced into the palmprint recognition field. In addition, many feature-based methods have been presented to implement palmprint recognition tasks. For instance, coding-based methods [11,12,13,14,15,16,17] have been extensively researched in the past decades. In these studies, the palmprint features were extracted by coding certain filtering results. Common coding methods include binarized statistical image features (BSIF) [13], double-orientation code (DOC) [14] and block dominant orientation code [17]. Minaee et al. [18] proposed a palmprint recognition algorithm using a deep scattering network which achieved fine recognition performance. Some other feature-based methods [19,20,21,22,23,24] mainly take advantage of statistical characteristics, such as the mean, variance and covariance, to implement palmprint recognition. In recent years, linear representation methods based on sparse theory [25] were proposed and widely applied to the palmprint recognition problem [26,27,28,29,30,31]. These methods consider a testing sample as a linear representation of the training set. That is, a given testing sample is anticipated to be approximately expressed by the training samples lying in a single class. This can be effectively accomplished by imposing a sparseness constraint on the approximate representation with the training samples.
For the sake of higher recognition accuracy, some multispectral palmprint recognition methods [32,33,34,35,36,37,38,39,40,41,42,43,44] have been studied. Because the images collected under different spectra contain more plentiful feature information, the recognition rate can be effectively improved. In these studies, different fusion strategies were utilized to increase the recognition accuracy. The conventional multispectral palmprint recognition methods can be mainly categorized into image-level fusion strategies and matching-score-level fusion strategies. The basic idea of image-level fusion is to first decompose the images under different spectra, then integrate these separate decompositions into a compound approximation and reconstruct the fused image through the inverse transformation to implement the recognition task. Based on this, Han et al. [32] used the discrete wavelet transform (DWT) to decompose palmprint images acquired under different spectra, and then reconstructed the fused palmprint image to accomplish multispectral palmprint recognition. Xu et al. [37] introduced the quaternion matrix to represent the palmprint images under different spectra, and then extended PCA and DWT into the quaternion domain to implement feature extraction; finally, the Euclidean distance was used to perform the recognition task. Gumaei et al. [38] employed an autoencoder with a regularized extreme learning machine (AE-RELM) to accomplish multispectral palmprint recognition and effectively improved the accuracy. Xu et al. [39] presented a novel multispectral palmprint recognition algorithm that used the digital shearlet transform (DST) to implement the image fusion and proposed a multiclass projection ELM (MPELM) to accomplish the classification task. In the score-level fusion methods, the matching scores are first obtained separately by a comparator for the different spectral bands, then fused by certain rules, and the classification is accomplished based on the fused score. Zhang et al. [41] presented a novel algorithm named line orientation-based coding (LOC) to extract the features of the palmprint images under different spectra, and then carried out the recognition task with a matching-level fusion rule. Minaee et al. [42] used the co-occurrence matrix to extract texture features, then employed a minimum distance classifier (MDC) and a weighted majority voting system (WMV) to accomplish multispectral palmprint recognition. Minaee et al. [43] presented a set of wavelet-DCT features for multispectral palmprint recognition. Although many achievements have been made in the study of multispectral palmprint recognition, there are still many open questions that need further study, for example, how to increase the recognition accuracy when the collected images are contaminated by different noises.
Inspired by these studies, in this article we present a novel robust L2 sparse representation with a tensor-based extreme learning machine (RL2SR-TELM) algorithm that uses an adaptive image-level fusion strategy to accomplish multispectral palmprint recognition. The key contributions of our algorithm can be summarized as follows. Firstly, a robust L2-norm-based sparse representation model is constructed to calculate the linear representation coefficients. It overcomes the high computational complexity of L1-norm regularization and the lack of robustness to noise contamination. Secondly, an adaptive weighted method is presented to accomplish the fusion of multispectral palmprint images at the image level. In this method, a weighted sparse and collaborative concentration index (WSCCI) is proposed that can efficiently quantify the discrimination of multispectral palmprint images. Using the robust sparse coefficients and the WSCCI, an adaptive weighted fusion strategy is proposed to reconstruct the fused palmprint image. Finally, aiming at the high-order signal classification problem, we extend the conventional ELM [45] into the tensor space and put forward a novel TELM method. It inherits the advantages of the conventional ELM (i.e., excellent learning speed and generalization performance) and achieves outstanding recognition efficiency.
The rest of this paper is organized as follows: in Section 2, we introduce the principle of the multispectral palmprint acquisition device. Then we discuss our proposed RL2SR-TELM algorithm in Section 3. In Section 4, the simulation experiments and the result analysis of our proposed algorithm are illustrated in detail. Section 5 concludes this paper.

2. Acquisition Device of Multispectral Palmprint Images

The Biometrics Research Centre (BRC) of Hong Kong Polytechnic University (PolyU) has developed an acquisition device [46] for multispectral palmprints. It can collect palmprint images under the Blue, Green, Red and Near Infrared (NIR) spectra, respectively. Figure 1 illustrates the principle of the acquisition device. It mainly includes a multispectral light source module, a light source control module, a CCD imaging sensor, an image acquisition module (A/D conversion module) and an image display module. The multispectral light source module is located at the bottom of the device and consists of four monochromatic light sources. The light controller module controls the multispectral light and enables the CCD imaging module to acquire palmprint images under the different spectra. The image acquisition module captures the multispectral palmprint images and converts the analog images into digital ones by A/D conversion. Figure 2 shows the palmprint images acquired under the different spectra.

3. Proposed Algorithm

Figure 3 illustrates the flowchart of the presented RL2SR-TELM algorithm. It can be mainly separated into the following steps: Firstly, the acquired multispectral palmprint image is preprocessed to obtain the region of interest (ROI) of the image. Then, we calculate the sparse representation coefficients of sample images under different spectra by utilizing the proposed robust L2 sparse representation method. After that, an adaptive weighted fusion strategy is presented to obtain the fused images. Finally, by integrating the tensor theory with ELM, we propose a TELM method to complete the recognition task.

3.1. Robust L2 Sparse Representation Method

3.1.1. SRC Model

The sparse representation idea was introduced into biometric recognition for the first time in 2009 by Wright et al. [25]. Denote the training set matrix as $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{d \times n}$, where $x_i$ is a training sample and $d$ and $n$ denote the training sample dimension and number, respectively. For any given testing sample $y \in \mathbb{R}^d$, we suppose that it can be coded approximately over the training matrix $X$; then the SRC model can be described as:
$$\alpha = \arg\min_{\alpha} \|\alpha\|_0, \quad \text{s.t. } y = X\alpha, \tag{1}$$
where $\alpha \in \mathbb{R}^n$ is the representation coefficient and $\|\alpha\|_0$ denotes the L0 norm, which counts the number of nonzero elements of the vector. The objective of SRC is to find, as quickly as possible, a sparse coefficient $\alpha$ that can represent the testing sample over the training set. Model (1) is an NP-hard problem and theoretically intractable. Reference [47] proved that when the representation coefficient is sparse enough, the L0 norm can be approximately replaced by the L1 norm. On the basis of this theory, Wright et al. proposed the following model:
$$\alpha = \arg\min_{\alpha} \|\alpha\|_1, \quad \text{s.t. } y = X\alpha. \tag{2}$$
This is a classical model and it has been extensively used in various areas including image reconstruction, image de-noising, compressive sensing and machine learning. Although many scholars have devoted themselves to this algorithm and proposed many improvements, the drawback of inefficiency is still not completely resolved.
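To make the computation in model (2) concrete, the following minimal sketch solves its unconstrained (Lagrangian) form $\min_{\alpha} \frac{1}{2}\|y - X\alpha\|_2^2 + \lambda\|\alpha\|_1$ with the basic iterative shrinkage-thresholding scheme that FISTA (mentioned in Section 3.1.2) accelerates. The function names and parameter values are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_sparse_code(X, y, lam=0.01, n_iter=500):
    # ISTA sketch for min_a 0.5*||y - X a||_2^2 + lam*||a||_1.
    # lam and n_iter are illustrative choices.
    a = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ a - y)         # gradient of the residual term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```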

3.1.2. Robust L2 Sparse Representation Method

The SRC model actually supposes that the coding residual obeys a Gaussian or Laplacian probability density function. However, this hypothetical description is not always accurate enough in practice. In addition, SRC needs to solve an L1 regularization problem, and its calculation speed is very slow. To address these drawbacks, many researchers have proposed improved SRC algorithms. For example, Yang et al. [48] proposed a novel sparse representation method that solves the sparse representation problem by maximum likelihood estimation (MLE); it can deal with occlusion and outliers more robustly. Xu et al. [49] made use of L2 regularization to acquire the sparse coefficient and proposed a new discriminative sparse representation method (DSRM). Inspired by these ideas, we propose a novel robust L2-regularization-based sparse representation method, namely RL2SR.
Suppose that there are $s$ different spectral bands, the class number of each spectral palmprint is $C$ and each class has $m$ training samples. Thus, there are $N = mC$ training samples for each spectrum. Vectorizing each training sample into a $d$-dimensional column vector, the training sample matrix can be denoted as $X = [X_1, \ldots, X_i, \ldots, X_C]$, where $X_i = [x_{m(i-1)+1}^1, x_{m(i-1)+1}^2, \ldots, x_{m(i-1)+1}^s, \ldots, x_{mi}^1, x_{mi}^2, \ldots, x_{mi}^s]$ is the training sample sub-matrix of the $i$-th class and $x_{mi}^1, x_{mi}^2, \ldots, x_{mi}^s$ are the $(m \times i)$-th training samples under the different spectra. Then, given any testing sample $y^l$, where $l = 1, 2, \ldots, s$ denotes the spectral band, we can construct the following optimization problem:
$$\arg\min_{A^l} \rho(y^l - XA^l) + \lambda\,\phi(A^l), \quad (l = 1, 2, \ldots, s), \tag{3}$$
where $\lambda > 0$ is a constant, namely the regularization parameter, which balances the representation residual term and the regularization term. Here, $A^l = [A_1^l; A_2^l; \ldots; A_C^l]$ is the linear representation coefficient of the testing sample $y^l$ over the training set.
The first term of the optimization function (3) can be denoted as $\rho(e^l) = \rho(y^l - XA^l)$, where $\rho(\cdot): \mathbb{R}^d \to \mathbb{R}$. Then:
$$\rho(e^l) = \sum_{k=1}^{d} \rho(e_k^l), \tag{4}$$
where $e_k^l = |y_k^l - X_k A^l|$ denotes the residual of the $k$-th element between $y^l$ and its approximate linear representation $XA^l$; $y_k^l$ and $X_k$ are the $k$-th element of the testing sample and the $k$-th row of the training set matrix, respectively. In general, the residual function $\rho(\cdot)$ is designed to minimize the effect generated by occlusion and outliers. Huber, Cauchy and Welsch functions can be used as the residual function. In reference [48], Yang et al. utilized the logistic function to describe the residual information and obtained satisfactory performance. The logistic function can be expressed as follows:
$$\rho(e_k^l) = -\frac{1}{2\mu}\left(\ln\!\left(1 + \exp\!\left(-\mu (e_k^l)^2 + \mu\delta\right)\right) - \ln\!\left(1 + \exp(\mu\delta)\right)\right), \tag{5}$$
where $\mu$ and $\delta$ are positive parameters; their selection will be discussed in Section 4.2. In order to solve problem (3), we differentiate $\rho(y^l - XA^l)$ with respect to $A^l$:
$$\frac{d\rho(y^l - XA^l)}{dA^l} = \sum_{k=1}^{d} \frac{d\rho(e_k^l)}{dA^l} = \sum_{k=1}^{d} \frac{d\rho(e_k^l)}{de_k^l}\frac{de_k^l}{dA^l} = \frac{1}{2}\sum_{k=1}^{d} \frac{d\rho(e_k^l)}{de_k^l}\frac{1}{e_k^l}\frac{d(e_k^l)^2}{dA^l} = \frac{1}{2}\sum_{k=1}^{d} \omega(e_k^l)\,\frac{d(e_k^l)^2}{dA^l}. \tag{6}$$
Furthermore, defining the diagonal residual weight matrix $W^l = \mathrm{diag}(\omega(e_1^l), \omega(e_2^l), \ldots, \omega(e_d^l))$ and treating it as fixed, we have $\sum_{k=1}^{d}\omega(e_k^l)\,\frac{d(e_k^l)^2}{dA^l} = \frac{d}{dA^l}\big(\|(W^l)^{1/2} e^l\|_2^2\big)$, so Equation (6) can be regarded as the derivative of $\frac{1}{2}\|(W^l)^{1/2} e^l\|_2^2$.
By using Equation (5), the residual weight function can be calculated as follows:
$$\omega(e_k^l) = \frac{d\rho(e_k^l)}{de_k^l}\cdot\frac{1}{e_k^l} = \frac{\exp\!\left(-\mu (e_k^l)^2 + \mu\delta\right)}{1 + \exp\!\left(-\mu (e_k^l)^2 + \mu\delta\right)}. \tag{7}$$
For the residual weight matrix $W^l$, the following iterative method is proposed to calculate it:
Step 1: Initialize $W^{l,1} = \mathrm{diag}(1, 1, \ldots, 1)$ and calculate the collaborative code $\gamma^l$ of each testing sample by using the collaborative representation model:
$$\gamma^l = \arg\min_{\gamma^l} \|(W^l)^{1/2}(y^l - X\gamma^l)\|_2^2 + \xi\|\gamma^l\|_2^2, \quad (l = 1, 2, \ldots, s).$$
Step 2: Substitute the collaborative residuals $e_k^l = |y_k^l - X_k\gamma^l|$, $(k = 1, \ldots, d)$ into Equation (7) and obtain the residual weight matrix $W^l$.
Step 3: If $W^l$ has not converged, repeat Steps 1 and 2; otherwise output $W^l$.
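A compact sketch of Steps 1-3 in NumPy follows, under the assumption that the collaborative model of Step 1 is solved as a weighted ridge regression in closed form; the parameter values (ξ, iteration count, tolerance) are illustrative, not taken from the paper.

```python
import numpy as np

def residual_weights(e, mu, delta):
    # Logistic residual weight function of Eq. (7): near 1 for small
    # residuals, near 0 for large (outlier) residuals.
    z = -mu * e ** 2 + mu * delta
    return np.exp(z) / (1.0 + np.exp(z))

def estimate_weight_matrix(X, y, mu, delta, xi=1e-3, n_iter=10, tol=1e-4):
    # Steps 1-3: alternate collaborative coding (Step 1) with weight
    # updates (Step 2) until the diagonal of W stabilizes (Step 3).
    d, n = X.shape
    w = np.ones(d)                            # Step 1: initialize W = I
    for _ in range(n_iter):
        XtW = X.T * w                         # equals X^T W for diagonal W
        gamma = np.linalg.solve(XtW @ X + xi * np.eye(n), XtW @ y)
        e = np.abs(y - X @ gamma)             # Step 2: collaborative residuals
        w_new = residual_weights(e, mu, delta)
        if np.linalg.norm(w_new - w) < tol:   # Step 3: convergence check
            w = w_new
            break
        w = w_new
    return np.diag(w)
```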
With the residual weight matrix $W^l$ calculated, Equation (3) can be rewritten as follows:
$$\arg\min_{A^l} \frac{1}{2}\|(W^l)^{1/2} e^l\|_2^2 + \lambda\,\phi(A^l), \quad (l = 1, 2, \ldots, s). \tag{8}$$
Due to the presence of the parameter $\lambda$, the coefficient in front of the first term can be omitted, and Equation (8) becomes:
$$\arg\min_{A^l} \|(W^l)^{1/2} e^l\|_2^2 + \lambda\,\phi(A^l), \quad (l = 1, 2, \ldots, s). \tag{9}$$
For the second term $\phi(A^l)$ of the objective function, SRC [25] adopted the L1 norm to realize the sparseness of the linear representation coefficient. In general, an iterative algorithm is employed to solve the L1-norm-regularized sparse representation problem. There are many well-known algorithms [50] to implement the iteration, such as L1-regularized least squares (L1LS), the homotopy method, the augmented Lagrangian method (ALM), orthogonal matching pursuit (OMP) [51] and the fast iterative shrinkage-thresholding algorithm (FISTA). However, these methods still suffer from low efficiency. To address this issue, Zhang et al. [52] introduced collaborative representation-based classification (CRC) and utilized L2 regularization to obtain the representation coefficient. Although CRC provides an efficient algorithm, it fails to give full consideration to the sparseness of the linear representation. Reference [49] employed L2 regularization to implement face recognition by utilizing a discriminative sparse representation method. Inspired by this, the L2 regularization term is introduced into our model and a novel RL2SR model is proposed as follows:
$$\arg\min_{A^l} \|(W^l)^{1/2} e^l\|_2^2 + \lambda \sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2, \quad (l = 1, 2, \ldots, s). \tag{10}$$
Since:
$$\sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2 = 2C\sum_{i=1}^{C} \|X_i A_i^l\|^2 + 2\sum_{i=1}^{C}\sum_{j=1}^{C} \big((X_i A_i^l)^T (X_j A_j^l)\big), \quad (l = 1, 2, \ldots, s), \tag{11}$$
Equation (11) can be separated into two parts. Minimizing $(X_i A_i^l)^T (X_j A_j^l)$ implies that the correlation between the $i$-th and $j$-th classes is also minimal with respect to the linear representation. This gives the linear approximation combination the best discrimination ability. Thus, the second term of Equation (11) has the capability of decorrelating the linear representation combinations of different classes. Correspondingly, minimizing the sum of the terms $(X_i A_i^l)^T (X_j A_j^l)$, instead of any individual term, accomplishes the decorrelation effect for the different classes. In consequence, this approach can assign the testing sample to the truly nearest class. Minimization of $\|X_i A_i^l\|^2$, $(i = 1, 2, \ldots, C)$ means that the norm of the linear representation combination of each class is also small. As in existing linear representation approaches such as SRC and CRC, there is a competitive relationship between the different classes of training samples. In other words, the testing sample is denoted by a weighted sum of the training samples from all of the classes. Obviously, this is a linear representation in which every class makes its own contribution to representing the testing sample. Competition in representation implies that when one class makes an important contribution to the linear representation, the remaining classes contribute considerably less.
The objective function shown in Equation (10) can be rewritten as:
$$L(A^l) = \arg\min_{A^l} \|(W^l)^{1/2}(y^l - XA^l)\|_2^2 + \lambda \sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2, \quad (l = 1, 2, \ldots, s). \tag{12}$$
For the first term of objective function (12), using $\arg\min_{A^l} \|(W^l)^{1/2}(y^l - XA^l)\|_2^2$ instead of $y^l = XA^l$ implies that $XA^l$ is a linear approximation of the test image. That is to say, the model can tolerate considerable noise contamination. Meanwhile, the residual weight function can measure the linear representation residual well and enhance the noise robustness of the proposed model. In order to optimize the presented model, we introduce the following theorem:
Theorem 1. 
The proposed RL2SR model (12) is convex and differentiable w.r.t. the coefficient $A^l$, and it has a closed-form solution.
Proof. 
Firstly, the objective function (12) can be considered as a combination of two L2 regularization terms, i.e., $\|(W^l)^{1/2}(y^l - XA^l)\|_2^2$ and $\sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2$. By the properties of the L2 norm, the convexity and differentiability of the proposed model (12) can be easily proved.
Secondly, the derivative of the function $\|(W^l)^{1/2}(y^l - XA^l)\|_2^2$ can be computed as follows:
$$\frac{d}{dA^l}\|(W^l)^{1/2}(y^l - XA^l)\|_2^2 = -2X^T W^l (y^l - XA^l).$$
On the other hand, the second term $\sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2$ is expressed through the class-wise sub-vectors rather than through $A^l$ explicitly, so we cannot compute its derivative directly. To address this, we compute the partial derivatives of $\sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2$ w.r.t. $A_k^l$ $(k = 1, 2, \ldots, C)$. Denoting $\varphi(A^l) = \sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2$, we have:
$$\begin{aligned}
\frac{\partial\varphi}{\partial A_k^l} &= \frac{\partial}{\partial A_k^l}\left( \sum_{i=1}^{C}\sum_{j=1}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2 \right) \\
&= \frac{\partial}{\partial A_k^l}\left( \sum_{\substack{j=1 \\ j\neq k}}^{C} \|X_k A_k^l + X_j A_j^l\|_2^2 + \sum_{\substack{i=1 \\ i\neq k}}^{C} \|X_i A_i^l + X_k A_k^l\|_2^2 + \sum_{\substack{i=1 \\ i\neq k}}^{C}\sum_{\substack{j=1 \\ j\neq k}}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2 + \|X_k A_k^l + X_k A_k^l\|_2^2 \right) \\
&= \frac{\partial}{\partial A_k^l}\left( 2\sum_{\substack{j=1 \\ j\neq k}}^{C} \|X_k A_k^l + X_j A_j^l\|_2^2 + \sum_{\substack{i=1 \\ i\neq k}}^{C}\sum_{\substack{j=1 \\ j\neq k}}^{C} \|X_i A_i^l + X_j A_j^l\|_2^2 + \|X_k A_k^l + X_k A_k^l\|_2^2 \right) \\
&= 4X_k^T\Big( (C+1)X_k A_k^l + \sum_{\substack{j=1 \\ j\neq k}}^{C} X_j A_j^l \Big) = 4X_k^T\Big( C X_k A_k^l + \sum_{j=1}^{C} X_j A_j^l \Big) = 4X_k^T\big( C X_k A_k^l + X A^l \big).
\end{aligned}$$
Then, we can obtain the derivative as follows:
$$\frac{d\varphi}{dA^l} = \begin{pmatrix} \dfrac{\partial\varphi}{\partial A_1^l} \\ \vdots \\ \dfrac{\partial\varphi}{\partial A_C^l} \end{pmatrix} = \begin{pmatrix} 4X_1^T(C X_1 A_1^l + XA^l) \\ \vdots \\ 4X_C^T(C X_C A_C^l + XA^l) \end{pmatrix} = 4C\begin{pmatrix} X_1^T X_1 & & 0 \\ & \ddots & \\ 0 & & X_C^T X_C \end{pmatrix} A^l + 4X^T X A^l.$$
By denoting:
$$M = \begin{pmatrix} X_1^T X_1 & & 0 \\ & \ddots & \\ 0 & & X_C^T X_C \end{pmatrix},$$
we have:
$$\frac{d\varphi}{dA^l} = 4(CM + X^T X)A^l.$$
As a consequence, the derivative of objective function (12) with respect to $A^l$ is:
$$\frac{dL}{dA^l} = -2X^T W^l (y^l - XA^l) + 4\lambda(CM A^l + X^T X A^l), \quad (l = 1, 2, \ldots, s).$$
By employing the optimality condition and setting this derivative to zero, the closed-form solution of objective function (12) is obtained as follows:
$$A^l = \big(2\lambda C M + 2\lambda X^T X + X^T W^l X\big)^{-1} X^T W^l y^l, \quad (l = 1, 2, \ldots, s). \tag{13}$$
The proof of Theorem 1 is thus completed. □
The proposed RL2SR method is summarized in Table 1.
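As a concrete complement to Table 1, a minimal NumPy sketch of the closed-form solution in Equation (13) is given below; supplying the per-class sub-matrices as a list is an illustrative interface choice, not something prescribed by the paper.

```python
import numpy as np
from scipy.linalg import block_diag

def rl2sr_coefficients(X_blocks, W, y, lam):
    # Closed-form RL2SR solution of Eq. (13):
    #   A = (2*lam*C*M + 2*lam*X^T X + X^T W X)^{-1} X^T W y,
    # where M is block-diagonal with blocks X_i^T X_i (matrix M above).
    C = len(X_blocks)                       # number of classes
    X = np.hstack(X_blocks)                 # full training matrix
    M = block_diag(*[Xi.T @ Xi for Xi in X_blocks])
    lhs = 2 * lam * C * M + 2 * lam * (X.T @ X) + X.T @ W @ X
    return np.linalg.solve(lhs, X.T @ W @ y)
```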

3.2. Image Fusion Based on Adaptive Weighted Method

In this section, a weighted sparse and collaborative concentration index is introduced to quantify the discrimination of each spectral testing sample, and an adaptive weighted fusion method is proposed to construct the fused palmprint image.
Definition 1. 
[25] (sparse concentration index (SCI)). The SCI of a coefficient vector $\alpha \in \mathbb{R}^n$ is defined as:
$$SCI(\alpha) = \frac{C \cdot \max_i \|\delta_i(\alpha)\|_1 / \|\alpha\|_1 - 1}{C - 1}, \tag{14}$$
where $C$ is the class number and $\delta_i(\alpha)$ is an indicator function defined on $\mathbb{R}^n$ which keeps the coefficients affiliated with the $i$-th class and sets all the other coefficients to zero.
Obviously, $SCI(\alpha) = 1$ implies that the training samples from a single class can express the testing sample well. On the contrary, $SCI(\alpha) = 0$ means that all of the training samples contribute equally to representing the testing sample. Therefore, SCI can efficiently measure the sparseness of the linear representation coefficient and the discrimination ability of the testing sample. If $SCI(\alpha) = 1$, the testing sample has the strongest discrimination ability and can easily be classified into the correct class. If $SCI(\alpha) = 0$, the testing sample has the weakest discrimination ability and we cannot determine the actual class to which the testing sample belongs.
The SCI uses the L1 norm to evaluate the sparseness of the linear representation coefficient, so it cannot properly evaluate the coefficient obtained by our RL2SR method, in which L2 norm regularization is utilized and which considers not only the sparseness but also the collaborative representation information of the representation coefficient. To address this issue, the definition of SCI is extended, and a weighted sparse and collaborative concentration index, namely WSCCI, is proposed to evaluate the representation coefficient obtained by our RL2SR model.
Definition 2. 
(weighted sparse and collaborative concentration index (WSCCI)). The WSCCI of a coefficient vector $\alpha \in \mathbb{R}^n$ is defined as:
$$WSCCI(\alpha) = \frac{\mu_1\big(C \cdot \max_i \|\delta_i(\alpha)\|_1 / \|\alpha\|_1 - 1\big) + \mu_2\big(C \cdot \max_i \|\delta_i(\alpha)\|_2 / \|\alpha\|_2 - 1\big)}{(\mu_1 + \mu_2)(C - 1)}, \tag{15}$$
where $C$ denotes the class number and $\mu_1$ and $\mu_2$ are nonnegative parameters.
In the WSCCI, a weighted fusion of the sparse and collaborative concentration indices defined by the L1 norm and the L2 norm is utilized to evaluate the discriminative performance of the given sample. As a consequence, it can be regarded as the weighted sum of the SCI and the CCI (i.e., collaborative concentration index). From the above analysis, the proposed WSCCI can be utilized to build our adaptive weighted fusion method.
The proposed adaptive weighted image fusion method can be summarized as follows:
(1) For the linear representation coefficients $A^l$, $(l = 1, 2, \ldots, s)$ obtained by Equation (13), calculate $WSCCI(A^l)$, $(l = 1, 2, \ldots, s)$ by using Equation (15).
(2) Normalize $WSCCI(A^l)$, $(l = 1, 2, \ldots, s)$ by using:
$$\eta_l = \frac{WSCCI(A^l)}{WSCCI(A^1) + WSCCI(A^2) + \cdots + WSCCI(A^s)} = \frac{WSCCI(A^l)}{\sum_{i=1}^{s} WSCCI(A^i)}, \quad (l = 1, 2, \ldots, s). \tag{16}$$
(3) Reconstruct the fused multispectral palmprint image $y$ by using:
$$y = X(\eta_1 A^1 + \eta_2 A^2 + \cdots + \eta_s A^s) = X\sum_{l=1}^{s} \eta_l A^l. \tag{17}$$
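The following sketch implements Equation (15) and fusion steps (1)-(3) above; the equal index weights $\mu_1 = \mu_2$ and the integer class-label array interface are illustrative assumptions.

```python
import numpy as np

def wscci(A, labels, C, mu1=0.5, mu2=0.5):
    # WSCCI of Eq. (15). labels[i] is the class of the i-th training sample,
    # so delta_i(A) keeps only the coefficients affiliated with class i.
    def concentration(p):
        class_norms = [np.linalg.norm(A[labels == c], p) for c in range(C)]
        return C * max(class_norms) / np.linalg.norm(A, p) - 1.0
    return (mu1 * concentration(1) + mu2 * concentration(2)) / ((mu1 + mu2) * (C - 1))

def fuse_palmprint(X, A_list, labels, C):
    # Steps (1)-(3): score each spectrum by WSCCI, normalize (Eq. (16)),
    # and reconstruct the fused image vector (Eq. (17)).
    scores = np.array([wscci(A, labels, C) for A in A_list])
    eta = scores / scores.sum()
    return X @ sum(e * A for e, A in zip(eta, A_list))
```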
With the fused multispectral palmprint image obtained, TELM is proposed to implement the recognition task.

3.3. Principle of Tensor Based ELM

ELM can be considered as a generalized single-hidden-layer feedforward neural network (SLFN). Since ELM randomly chooses the initial values of the hidden nodes and analytically calculates the output weights, its learning speed is extremely fast compared to conventional supervised learning algorithms (e.g., the support vector machine (SVM) [53] and the k-nearest neighbor (KNN) algorithm). In addition, its generalization ability is better than that of many back-propagation neural network algorithms. In consequence, ELM has been extensively studied and widely applied in many areas (such as pattern classification, clustering analysis and regression) and plenty of research achievements have been obtained. Inspired by this idea, we present a novel TELM by extending the conventional ELM to the tensor space; it regards the image as a tensor to execute the recognition task.

3.3.1. ELM

Given a training set with $N$ different training samples $(x_j, t_j) \in \mathbb{R}^d \times \mathbb{R}^m$, $(j = 1, 2, \ldots, N)$, where $x_j = [x_{j1}, x_{j2}, \ldots, x_{jd}]^T \in \mathbb{R}^d$ denotes the $j$-th training sample and $t_j = [t_{j1}, t_{j2}, \ldots, t_{jm}]^T \in \mathbb{R}^m$ represents the target of sample $x_j$, a classical SLFN can be theoretically defined by:
$$\sum_{i=1}^{L} \beta_i f_i(x_j) = \sum_{i=1}^{L} \beta_i f(a_i \cdot x_j + b_i) = t_j, \quad j = 1, 2, \ldots, N. \tag{18}$$
In this model, the hidden node number is $L$ and the activation function is $f(x)$. $a_i = [a_{i1}, a_{i2}, \ldots, a_{id}]^T$ denotes the input weight vector which connects the input nodes with the $i$-th hidden node, $\beta_i = [\beta_{i1}, \beta_{i2}, \ldots, \beta_{im}]^T$ denotes the output weight vector which connects the output nodes with the $i$-th hidden node, and $b_i$ denotes the bias of the $i$-th hidden node. $a_i \cdot x_j$ is the dot product of $a_i$ and $x_j$. The classical SLFN can approximate the given training sample set with minimum residual.
Obviously, Equation (18) is a system of linear equations. In matrix form, we can rewrite it as follows:
$$H\beta = T, \tag{19}$$
where:
$$H = \begin{pmatrix} f(a_1 \cdot x_1 + b_1) & \cdots & f(a_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ f(a_1 \cdot x_N + b_1) & \cdots & f(a_L \cdot x_N + b_L) \end{pmatrix}_{N \times L}, \quad \beta = \begin{pmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{pmatrix}_{L \times m}, \quad T = \begin{pmatrix} t_1^T \\ \vdots \\ t_N^T \end{pmatrix}_{N \times m}.$$
Theorem 2. 
For a given standard SLFN possessing $L$ hidden nodes and an activation function $f: \mathbb{R} \to \mathbb{R}$ that is infinitely differentiable on its interval of definition, and a training set with $N$ different samples $(x_j, t_j)$, where $x_j \in \mathbb{R}^n$ denotes the sample data and $t_j \in \mathbb{R}^m$ represents the target of $x_j$: for any randomly assigned weights $a_i$ and biases $b_i$, the output matrix $H$ of the hidden layer can be obtained, the output weights follow from its pseudo-inverse, and $\|H\beta - T\| = 0$ holds with probability one with respect to any continuous probability distribution.
For the proof of Theorem 2, readers can refer to [45]. Based on this theory, ELM can be described as follows: with the initial weight vectors and the biases of the hidden layer nodes determined by random assignment, we can obtain the output matrix $H$ of the hidden layer from the input samples. Therefore, the training procedure of ELM is transformed into a classical least squares problem of linear equations, i.e.,:
$$\min_{\beta} \|H\beta - T\|_2. \tag{20}$$
The least squares solution of Equation (20) is:
$$\hat{\beta} = H^{+} T, \tag{21}$$
where $H^{+}$ denotes the Moore-Penrose pseudo-inverse of the matrix $H$.
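A minimal ELM sketch following Equations (18)-(21) is given below; the sigmoid activation and the random-seed handling are illustrative choices, not specified by the original ELM reference.

```python
import numpy as np

def elm_train(X, T, L, seed=0):
    # Random input weights and biases, sigmoid hidden layer, and output
    # weights by the Moore-Penrose pseudo-inverse (Eq. (21)).
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((X.shape[1], L))   # input weights a_i
    b = rng.standard_normal(L)                 # biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))     # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T               # beta = H^+ T
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return H @ beta                            # class scores; argmax = label
```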

3.3.2. Tensor Based ELM

Although the conventional ELM deals well with one-dimensional signals, a two-dimensional image must be vectorized and processed in one-dimensional space, and this transformation easily loses the spatial structure information of the image. In order to solve this problem, we extend the conventional ELM to the tensor space and put forward a novel tensor-based ELM to deal with high-dimensional signals.
In view of the high-dimensional characteristics of the palmprint image, we regard the fused image as a second-order tensor and classify it by the proposed TELM. In our method, the high-order singular value decomposition (HOSVD) algorithm [54] is utilized to decompose the fused palmprint image and construct the input weight values of the TELM model.
Given an $M$-order tensor $F \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_M}$ and a matrix $U \in \mathbb{R}^{J_m \times I_m}$, we define $B \in \mathbb{R}^{I_1 \times \cdots \times I_{m-1} \times J_m \times I_{m+1} \times \cdots \times I_M}$ as the $m$-th modal product of $F$ and $U$; the elements of $B$ can be calculated by:
$$(F \times_m U)_{i_1 \cdots i_{m-1} j_m i_{m+1} \cdots i_M} = \sum_{i_m} f_{i_1 \cdots i_{m-1} i_m i_{m+1} \cdots i_M}\, u_{j_m i_m}, \tag{22}$$
so the $m$-th modal tensor product can be simply denoted by:
$$B = F \times_m U. \tag{23}$$
The HOSVD algorithm can be implemented by using the tensor product. Given an $M$-order tensor $F \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_M}$, we can use the tensor product to decompose $F$ as follows:
$$F = S \times_1 U^{(1)} \times_2 U^{(2)} \times \cdots \times_M U^{(M)}, \tag{24}$$
where $S$ denotes an $M$-order tensor called the core tensor, and $U^{(1)} \in \mathbb{R}^{I_1 \times I_1}, U^{(2)} \in \mathbb{R}^{I_2 \times I_2}, \ldots, U^{(M)} \in \mathbb{R}^{I_M \times I_M}$ are unitary matrices whose columns correspond to the orthogonal bases of the unfolded matrices $F_{(1)}, F_{(2)}, \ldots, F_{(M)}$.
The low-rank approximation of the tensor $F$ can be calculated by HOSVD, i.e.,:
$$F \approx S(q_1, q_2, \ldots, q_M) \times_1 U_{q_1}^{(1)} \times_2 U_{q_2}^{(2)} \times \cdots \times_M U_{q_M}^{(M)}, \tag{25}$$
where $S \in \mathbb{R}^{q_1 \times q_2 \times \cdots \times q_M}$ represents the principal component core tensor and $U_{q_i}^{(i)}$ represents the truncation matrix composed of the first $q_i$ columns of $U^{(i)}$, $i = 1, 2, \ldots, M$.
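A short NumPy sketch of the truncated HOSVD of Equation (25) follows, computing each factor matrix from the leading left singular vectors of the corresponding unfolding; the unfolding convention used here is an implementation assumption.

```python
import numpy as np

def unfold(F, mode):
    # Mode-m unfolding: the mode-m fibers of F become the columns of a matrix.
    return np.moveaxis(F, mode, 0).reshape(F.shape[mode], -1)

def truncated_hosvd(F, ranks):
    # Truncated HOSVD (Eq. (25)): U^(m) from the SVD of each unfolding,
    # core tensor by modal products with the transposed factor matrices.
    Us = []
    for m, q in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(F, m), full_matrices=False)
        Us.append(U[:, :q])                   # truncation matrix U_q^(m)
    S = F
    for m, U in enumerate(Us):
        # Mode-m product with U^T, realized via tensordot over the m-th axis.
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return S, Us
```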
According to the above discussion, we summarize the detailed process of the tensor-based ELM as follows. Let $G_i \in \mathbb{R}^{s \times t}$, $(i = 1, 2, \ldots, N)$ be the $i$-th fused training palmprint image and $t_i \in \mathbb{R}^m$, $(i = 1, 2, \ldots, N)$ be the target of sample $G_i$, and denote the training sample set as $G \in \mathbb{R}^{s \times t \times N}$. Then the HOSVD decomposition of $G_i \in \mathbb{R}^{s \times t}$, $(i = 1, 2, \ldots, N)$ can be formulated as:
$$G_i \approx S(L_1, L_2) \times_1 U_{L_1} \times_2 V_{L_2}, \tag{26}$$
where $U_{L_1} = [u_1, u_2, \ldots, u_{L_1}]$ and $V_{L_2} = [v_1, v_2, \ldots, v_{L_2}]$ represent the truncation matrices with $L_1$ and $L_2$ columns, respectively. Then TELM can be defined as:
$$\sum_{l_1=1}^{L_1} \sum_{l_2=1}^{L_2} g\big(G_j \times_1 u_{l_1} \times_2 v_{l_2}\big)\, \beta_{l_2 + (l_1 - 1)L_1} = t_j, \quad (j = 1, 2, \ldots, N), \tag{27}$$
where $L_1$ and $L_2$ denote the hidden layer node numbers along the two tensor directions, so that there are $L = L_1 \times L_2$ hidden layer nodes in total; $u_{l_1}$ and $v_{l_2}$ denote the input weight vectors of the hidden layer along the respective tensor directions; and $\beta_{l_2 + (l_1-1)L_1}$ denotes the weight vector between the output nodes and the $(l_2 + (l_1-1)L_1)$-th node in the hidden layer. Similar to the ELM algorithm, $g(\cdot)$ denotes the activation function. Finally, the output weights $\beta$ can be obtained from Equation (27) by the least squares method.
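To illustrate Equations (26)-(27), the following sketch builds the hidden layer from the modal products $u_{l_1}^T G_j v_{l_2}$ and solves for the output weights by least squares. Deriving $U$ and $V$ from the summed row/column covariances of the training images is one plausible realization of the HOSVD step, and the sigmoid activation is an assumption; neither detail is fixed by the paper.

```python
import numpy as np

def telm_train(G, T, L1, L2):
    # G: (N, s, t) stack of fused training images; T: (N, m) target matrix.
    # Input weight vectors u_l1, v_l2 from the leading singular vectors of
    # the summed row/column covariances (one realization of Eq. (26)).
    U, _, _ = np.linalg.svd(sum(g @ g.T for g in G))
    V, _, _ = np.linalg.svd(sum(g.T @ g for g in G))
    U, V = U[:, :L1], V[:, :L2]
    # Hidden layer of Eq. (27): entry (j, (l1-1)*L2 + l2) = g(u_l1^T G_j v_l2);
    # this row-major flattening matches l2 + (l1-1)L1 when L1 = L2.
    H = np.stack([(U.T @ g @ V).ravel() for g in G])
    H = 1.0 / (1.0 + np.exp(-H))              # sigmoid activation (assumed)
    beta = np.linalg.pinv(H) @ T              # least-squares output weights
    return U, V, beta

def telm_predict(G, U, V, beta):
    H = 1.0 / (1.0 + np.exp(-np.stack([(U.T @ g @ V).ravel() for g in G])))
    return H @ beta                           # class scores; argmax = label
```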

4. Experiments

In this section, we evaluate the presented multispectral palmprint recognition algorithm on the publicly available benchmark database offered by PolyU. Extensive experiments are implemented to demonstrate the effectiveness of the presented RL2SR method, the adaptive fusion strategy and TELM. In these experiments, we use the fused palmprint image as the input of the TELM classifier. The experiments are carried out on a PC equipped with Windows 7, an Intel Core i5-2320 CPU (3.0 GHz) and 6 GB RAM, and the algorithm is programmed in MATLAB 2017a.

4.1. The PolyU Multispectral Palmprint Database

The PolyU multispectral palmprint database was collected from 250 volunteers (195 males and 55 females). The age of the volunteers was mainly between 20 and 60 years. In order to embody the differences of the acquired palmprints and increase their variability, the palmprint images were acquired in two separate phases; the time interval between the two phases was 5–15 days and each phase lasted about 9 days. In each phase, both hands of each volunteer were acquired six times under four different spectra: Blue (470 nm), Green (525 nm), Red (660 nm) and NIR (880 nm). For each spectrum, 500 different palmprints were acquired from the 250 volunteers in the two phases; therefore, the database contains 6000 palmprint images per spectrum. That is, the multispectral palmprint database contains 6000 × 4 = 24,000 images in total. Reference [46] provided the ROI extraction process for the acquired multispectral palmprint images and established the database, namely the PolyU multispectral palmprint database (see Figure 4).
Figure 5 illustrates some images in the multispectral palmprint database. The images in rows 1–4 were acquired under the Blue, Green, Red and NIR spectra, respectively, and each column is from the same class. In practice, the acquisition process is easily contaminated by various noises. To simulate this, white Gaussian noise and salt & pepper noise are added to the images and the recognition experiments are implemented, respectively.
Figure 6 displays some multispectral palmprint images contaminated by different noises. Figure 6a shows the images contaminated by white Gaussian noise with mean 0 and standard deviation 25, while Figure 6b shows the images contaminated by 50% salt & pepper noise. Rows 1–4 of Figure 6 exhibit the noisy palmprint images under the Blue, Green, Red and NIR spectra, respectively.

4.2. Parameter Selection

4.2.1. Selection of μ and δ for Residual Function

Now, let us discuss the selection of the parameters $\mu$ and $\delta$ for the residual weight function in Equation (7). It can be seen from Equation (7) that $\omega(e_k^l) \to \exp(\mu\delta)/(1 + \exp(\mu\delta))$ as $e_k^l \to 0$. Similarly, as $e_k^l \to \infty$, $\omega(e_k^l) = \exp(\mu\delta)/(\exp(\mu (e_k^l)^2) + \exp(\mu\delta)) \to 0$. Since $\omega$ should span $(0, 1)$, the product $\mu\delta$ should be set large enough that $\omega(e_k^l) \approx \exp(\mu\delta)/\exp(\mu\delta) = 1$ for small residuals. For simplicity, denote $T = \mu\delta$. Since $e^7 > 1000$, in order to ensure $\omega(e_k^l) \approx 1$ when $e_k^l \to 0$, we set $T = \mu\delta > 7$. From Equation (7), $\omega(e_k^l) = 1/2$ when $\delta = (e_k^l)^2$, so the parameter $\delta$ determines the boundary point at which the residual weight passes through 0.5. For the sake of efficiently enhancing the robustness of the model to outliers and noise contamination, a method of selecting the parameter $\delta$ is presented as follows. Firstly, collect the squared errors $(e_k^l)^2$ into the vector $\bar{e}^l = [(e_1^l)^2, (e_2^l)^2, \ldots, (e_d^l)^2]^T$, then arrange its elements in descending order and denote the new vector by $\tilde{e}^l$. Denoting its maximum element by $M$ and its minimum element by $m$, set $\tau_1 = (1 - \theta)m + \theta M$, where $\theta$ is a constant with $\theta \in [0.6, 0.8]$. Since the dimension of $\tilde{e}^l$ is $d$, let $s$ be the nearest integer to $\theta d$ and select the $s$-th largest element of $\tilde{e}^l$ as $\tau_2$. Finally, let $\delta = (\tau_1 + \tau_2)/2$. Once $\delta$ is selected, the parameter $\mu$ is calculated by $\mu = T/\delta$. In our experiments, the constant is set to $T = 8$.
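The selection procedure just described can be written compactly as below; the default $\theta = 0.7$ sits in the stated range and $T = 8$ follows the text, while the rounding details are an implementation assumption.

```python
import numpy as np

def select_mu_delta(e, theta=0.7, T=8.0):
    # e: vector of residuals e_k. Sort the squared residuals in descending
    # order, form tau_1 (range-based) and tau_2 (order-statistic-based),
    # then set delta = (tau_1 + tau_2)/2 and mu = T/delta.
    sq = np.sort(e ** 2)[::-1]
    M, m = sq[0], sq[-1]
    tau1 = (1 - theta) * m + theta * M
    s = max(1, int(round(theta * len(sq))))   # index of the s-th largest element
    tau2 = sq[s - 1]
    delta = 0.5 * (tau1 + tau2)
    return T / delta, delta                   # returns (mu, delta)
```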

4.2.2. Selection of the Hidden Node Numbers Along the Directions of TELM

To evaluate the effect of the hidden node numbers along the two directions of TELM, experiments are implemented with the hidden node numbers varying from 1 to 20 in the noise-free case and under different noise contaminations. The recognition performance is illustrated in Figure 7, Figure 8 and Figure 9. Figure 7 and Figure 8 illustrate that our algorithm converges rapidly as the hidden node numbers increase; obviously, when both hidden node numbers are greater than 7, our algorithm achieves a perfect performance. From Figure 9, although the convergence performance is inferior to the noise-free case, our algorithm still obtains a good convergence speed. As a consequence, appropriate hidden node numbers can be selected according to the above analysis. For simplicity, the hidden node numbers in our experiments are set to $L_1 = L_2 = 10$.

4.3. Experiment Results and Analysis

In this subsection, experiments are implemented to validate the efficiency of our presented algorithm from the aspects of sparse representation, fusion strategy, classification approach and the overall algorithm. For the sake of demonstrating the robustness of the presented RL2SR model, we carry out experiments compared with several different models, such as SRC, CRC and DSRM. The recognition rates are shown in Table 2.
From Table 2, it is easy to discover that each algorithm achieves its highest and lowest recognition rates in the noise-free and salt & pepper noise cases, respectively. Since our proposed adaptive weighted fusion process approximates a spatial smoothing filter, the decrease of the recognition rate under white Gaussian noise contamination is not obvious. Furthermore, by using our RL2SR coefficients for fusion, the recognition rates reach 99.68%, 99.20% and 97.24%, which are 1.72% and 2.52% higher than DSRM in the noise-free and white Gaussian noise cases and 2.96% higher than SRC in the salt & pepper noise case, respectively. This indicates that our RL2SR is robust to different noises, which can improve the discriminant capability and increase the recognition rate of the fused image.
To evaluate the efficiency of the presented adaptive fusion strategy, comparison experiments with the sum and min-max fusion strategies are simulated and the recognition performance is listed in Table 3. In this experiment, the training sample number of each class varies from 2 to 4.
Table 3 illustrates that the recognition accuracies under the different fusion strategies increase with the training sample number. In particular, our presented fusion strategy achieves the highest recognition accuracies of 100%, 99.95% and 99.05% when the number of training samples is set to 4. Even when the training sample number declines to 2, our approach achieves an accuracy of 92.27% in the salt & pepper noise case, which is 19.74% higher than the min-max fusion strategy (72.53%). This implies that our fusion strategy has the strongest robustness compared with the sum and min-max fusion methods.
To demonstrate the classification efficiency of the presented TELM, we carry out experiments comparing it with some other classifiers, such as NN, KNN, ELM, MPELM and RELM. For these comparison classifiers, we vectorize the fused image and take this vector as the input. For each classifier, 3–6 training samples are selected to complete the recognition experiments, and the classification accuracy curves are plotted in Figure 10.
The curves in Figure 10 indicate that when the training sample number is greater than or equal to 4, all the algorithms achieve excellent recognition rates. The experimental results also show that, in the noise-free case, the recognition rate of our proposed TELM algorithm gradually increases with the number of training samples. Moreover, our TELM achieves higher recognition rates than the other algorithms, although the improvement is not dramatic because the recognition rates are already close to, or even reach, 100%. From the above analysis, it is easy to observe that the presented TELM algorithm achieves efficient recognition performance and strong stability compared with the other classifiers. Furthermore, more simulation experiments are implemented on the multispectral palmprint database contaminated by the aforementioned noises. The recognition performances are illustrated in Table 4.
It is observed from Table 4 that, in the case of white Gaussian noise contamination, the recognition rate of TELM outperforms the other classifiers. Meanwhile, the recognition accuracy of the presented TELM is remarkably higher than the other methods under salt & pepper noise contamination. Owing to the pulse characteristic of salt & pepper noise, it has a remarkable impact on the distance measurement between different samples. When the testing samples are contaminated by salt & pepper noise, the recognition accuracy of the KNN method reaches only 38.92%, which is significantly lower than that of our TELM algorithm (97.24%). Since the proposed TELM abandons the eigenvectors corresponding to the smaller eigenvalues, which have a higher correlation with the noise contamination, and retains the principal components corresponding to the major eigenvalues, it possesses a noise reduction capability and better discrimination ability. The experimental results in Table 4 also validate that our algorithm achieves a higher recognition rate and stronger robustness to noise contamination than the other classifiers.
To further validate the robustness of the proposed TELM algorithm, we add different degrees of salt & pepper noise to the testing samples and implement the recognition experiment. Figure 11 shows some multispectral palmprint images contaminated by salt & pepper noise with percentages from 10% to 80%. Figure 11a shows the original images under the different spectra; Figure 11b–i show the contaminated images under the different spectra as the degree of salt & pepper noise varies from 10% to 80%.
Figure 12 illustrates the recognition rate curves of our TELM algorithm and some of the aforementioned comparison classifiers. It is easy to find that the recognition rate curves of ELM, MPELM, RELM and our algorithm drop significantly when the percentage of noise contamination is greater than 60%. In particular, the recognition rates of the NN and KNN methods are obviously lower than those of the other algorithms when the palmprint image is contaminated by more than 20% salt & pepper noise; that is to say, the accuracy curves of NN and KNN decline fastest. These curves show that our proposed TELM algorithm outperforms the comparison classifiers at all percentages of noise contamination and possesses stronger robustness.
Table 5 illustrates the average classification times of the aforementioned classifiers on the whole database. Although our TELM classifier is slower than the ELM method, the difference (i.e., 0.08 s) is very small. Moreover, it is distinctly faster than the NN, KNN, MPELM and RELM classifiers; in particular, the classification time of NN is about five times that of our TELM. In addition, the above experimental results demonstrate that the recognition performance of our classifier significantly exceeds the NN, KNN, ELM, MPELM and RELM classifiers. This validates the recognition ability and efficiency of our algorithm.
Table 6 lists the recognition rates of our RL2SR-TELM algorithm with different spectral combinations. This experiment is implemented in the noise-free, white Gaussian noise and 50% salt & pepper noise cases, with 4 training samples per class.
Table 6 summarizes the excellent performance of our presented algorithm in the noise-free and white Gaussian noise cases. In the noise-free case, the recognition accuracy reaches 100% for most of the spectral combinations. Even when the samples are contaminated by white Gaussian noise, our algorithm achieves an accuracy of more than 99.50% for all of the spectral combinations and 99.95% for the combination of the Blue, Green, Red and NIR spectra. When the testing samples are contaminated by salt & pepper noise, the recognition rate declines markedly and reaches its lowest value of 76.75% under the NIR spectrum. Meanwhile, our RL2SR-TELM algorithm achieves excellent recognition performance with the combination of the Blue, Green, Red and NIR spectra in the noise-free, white Gaussian noise and salt & pepper noise cases, i.e., 100%, 99.95% and 99.05%, respectively. This indicates that our proposed RL2SR-TELM algorithm has excellent robustness to noise pollution.
Table 7 illustrates the recognition rates of our RL2SR-TELM algorithm compared with some state-of-the-art palmprint recognition methods, such as the deep scattering network method [18], the texture feature-based method [42] and the DCT-based features method [43]. It is easy to find that, for different numbers of training samples, our algorithm achieves excellent recognition performance. Although the recognition accuracy of our algorithm is 0.32% lower than that of the deep scattering network method when the training sample number is three, it is higher than the texture feature-based method and the DCT-based features method when the training sample number is four. In particular, the recognition rate of our proposed algorithm reaches 100% when the number of training samples is greater than four.
Table 8 lists the recognition rates of our RL2SR-TELM algorithm compared with some state-of-the-art multispectral palmprint recognition algorithms, such as the matching-score-level fusion by LOC method, the DST-MPELM method, the AE-RELM method, the quaternion PCA using quaternion DWT method and the image-level fusion by DWT method. In this experiment, we choose three samples per class to constitute the training set. The experimental results in Table 8 illustrate that our proposed algorithm achieves excellent recognition accuracy both in the noise-free case (99.68%) and under the various noise contaminations (99.20% and 97.24%, respectively).
When the samples are contaminated by salt & pepper noise, the presented algorithm has more obvious advantages, with an accuracy that is 0.76%, 7.26%, 1.48%, 7.08% and 14.49% higher than those of the comparison algorithms, respectively. Table 9 lists the time cost of our proposed RL2SR-TELM multispectral recognition algorithm; it takes about 0.10945 s to recognize one test sample.
To further demonstrate the performance of our presented RL2SR-TELM method, in the case of salt & pepper noise contamination, we plot the cumulative match characteristic (CMC) curves generated by our RL2SR-TELM method and the aforementioned comparison methods. Figure 13 shows the CMC curves.
From Figure 13, it is easy to find that our presented RL2SR-TELM method has the highest rank-1 recognition accuracy. Meanwhile, the cumulative match characteristic curve of our algorithm is the closest to the upper left corner of the coordinate system among the compared multispectral palmprint recognition approaches, which means that it has the fastest convergence. This implies that our algorithm outperforms the others in recognition accuracy and noise robustness, which is quite consistent with the aforementioned experimental results and analysis.

5. Conclusions

In this paper, a novel RL2SR-TELM algorithm is presented to implement multispectral palmprint recognition. Since the L2 regularization term is employed, the regularized optimization objective function is convex and a closed-form solution can be obtained efficiently. In addition, a new measurement, namely the WSCCI, and an adaptive fusion framework are proposed to construct the fused multispectral palmprint images. For the classification task, we extend the conventional extreme learning machine to the tensor domain and present a TELM algorithm. It deals with the palmprint image in two-dimensional space directly and makes the best use of its spatial structure to enhance the classification ability. Extensive experiments on the PolyU multispectral palmprint database confirm the strong robustness, excellent recognition accuracy and high efficiency of our proposed algorithm.

Author Contributions

Conceptualization, D.C. and X.Z.; Methodology, D.C.; Software, D.C.; Validation, D.C., X.Z. and X.X.; Formal Analysis, D.C.; Writing-Original Draft Preparation, D.C.; Writing-Review & Editing, D.C.; Supervision, X.Z.; Project Administration, X.Z.; Funding Acquisition, X.Z. and X.X.

Funding

This research was funded by National Natural Science Foundation (No. 61673316), Major Science and Technology Project of Guangdong Province (No. 2015B010104002).

Acknowledgements

The authors would like to thank the anonymous reviewers and the academic editor for their suggestions and comments, and the MDPI Branch Office in China for help in improving this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cui, J.R. 2D and 3D Palmprint fusion and recognition using PCA plus TPTSR method. Neural Comput. Appl. 2014, 24, 497–502.
2. Lu, G.M.; Zhang, D.; Wang, K.Q. Palmprint recognition using eigenpalms features. Pattern Recogn. Lett. 2003, 24, 1463–1467.
3. Bai, X.F.; Gao, N.; Zhang, Z.H.; Zhang, D. 3D palmprint identification combining blocked ST and PCA. Pattern Recogn. Lett. 2017, 100, 89–95.
4. Zuo, W.M.; Zhang, H.Z.; Zhang, D.; Wang, K.Q. Post-processed LDA for face and palmprint recognition: What is the rationale. Signal Process. 2010, 90, 2344–2352.
5. Rida, I.; Herault, R.; Marcialis, G.L.; Gasso, G. Palmprint recognition with an efficient data driven ensemble classifier. Pattern Recogn. Lett. In press.
6. Rida, I.; Al Maadeed, S.; Jiang, X.; Lunke, F.; Bensrhair, A. An ensemble learning method based on random subspace sampling for palmprint identification. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2047–2051.
7. Shang, L.; Huang, D.S.; Du, J.X.; Zheng, C.H. Palmprint recognition using FastICA algorithm and radial basis probabilistic neural network. Neurocomputing 2006, 69, 1782–1786.
8. Pan, X.; Ruan, Q.Q. Palmprint recognition using Gabor feature-based (2D)2PCA. Neurocomputing 2008, 71, 3032–3036.
9. Ekinci, M.; Aykut, M. Gabor-based kernel PCA for palmprint recognition. Electron. Lett. 2007, 43, 1077–1079.
10. Ekinci, M.; Aykut, M. Palmprint recognition by applying wavelet-based kernel PCA. J. Comput. Sci. Technol. 2008, 23, 851–861.
11. Fei, L.; Zhang, B.; Xu, Y.; Yan, L.P. Palmprint recognition using neighboring direction indicator. IEEE Trans. Hum. Mach. Syst. 2016, 46, 787–798.
12. Zheng, Q.; Kumar, A.; Pan, G. A 3D feature descriptor recovered from a single 2D palmprint image. IEEE Trans. Pattern Anal. 2016, 38, 1272–1279.
13. Younesi, A.; Amirani, M.C. Gabor filter and texture based features for palmprint recognition. Procedia Comput. Sci. 2017, 108, 2488–2495.
14. Fei, L.K.; Xu, Y.; Tang, W.L.; Zhang, D. Double-orientation code and nonlinear matching scheme for palmprint recognition. Pattern Recogn. 2016, 49, 89–101.
15. Gumaei, A.; Sammouda, R.; Al-Salman, A.M.; Alsanad, A. An effective palmprint recognition approach for visible and multispectral sensor images. Sensors 2018, 18, 1575.
16. Tabejamaat, M.; Mousavi, A. Concavity-orientation coding for palmprint recognition. Multimed. Tools Appl. 2017, 76, 9387–9403.
17. Chen, H.P. An efficient palmprint recognition method based on block dominant orientation code. Optik 2015, 126, 2869–2875.
18. Minaee, S.; Wang, Y. Palmprint recognition using deep scattering convolutional network. In Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS), Baltimore, MD, USA, 28–31 May 2017; pp. 1–4.
19. Tamrakar, D.; Khanna, P. Kernel discriminant analysis of block-wise Gaussian derivative phase pattern histogram for palmprint recognition. J. Vis. Commun. Image Represent. 2016, 40, 432–448.
20. Li, G.; Kim, J. Palmprint recognition with local micro-structure tetra pattern. Pattern Recogn. 2017, 61, 29–46.
21. Luo, Y.T.; Zhao, L.Y.; Zhang, B.; Jia, W.; Xue, F.; Lu, J.T.; Zhu, Y.H.; Xu, B.Q. Local line directional pattern for palmprint recognition. Pattern Recogn. 2016, 50, 26–44.
22. Jia, W.; Hu, R.X.; Lei, Y.K.; Zhao, Y.; Gui, J. Histogram of oriented lines for palmprint recognition. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 385–395.
23. Zhang, S.W.; Wang, H.X.; Huang, W.Z.; Zhang, C.L. Combining modified LBP and weighted SRC for palmprint recognition. Signal Image Video Process. 2018, 12, 1035–1042.
24. Guo, X.M.; Zhou, W.D.; Zhang, Y.L. Collaborative representation with HM-LBP features for palmprint recognition. Mach. Vis. Appl. 2017, 28, 283–291.
25. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. 2009, 31, 210–227.
26. Maadeed, S.A.; Jiang, X.D.; Rida, I.; Bouridane, A. Palmprint identification using sparse and dense hybrid representation. Multimed. Tools Appl. 2018, 1–15.
27. Tabejamaat, M.; Mousavi, A. Manifold sparsity preserving projection for face and palmprint recognition. Multimed. Tools Appl. 2017, 77, 12233–12258.
28. Zuo, W.M.; Lin, Z.C.; Guo, Z.H.; Zhang, D. The multiscale competitive code via sparse representation for palmprint verification. In Proceedings of the 2010 International IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 2265–2272.
29. Xu, Y.; Fan, Z.Z.; Qiu, M.N.; Zhang, D.; Yang, J.Y. A sparse representation method of bimodal biometrics and palmprint recognition experiments. Neurocomputing 2013, 103, 164–171.
30. Rida, I.; Al Maadeed, N.; Al Maadeed, S. A novel efficient classwise sparse and collaborative representation for holistic palmprint recognition. In Proceedings of the 2018 IEEE NASA/ESA Conference on Adaptive Hardware and Systems (AHS), Edinburgh, UK, 6–9 August 2018; pp. 156–161.
31. Rida, I.; Maadeed, S.A.; Mahmood, A.; Bouridane, A.; Bakshi, S. Palmprint identification using an ensemble of sparse representations. IEEE Access 2018, 6, 3241–3248.
32. Han, D.; Guo, Z.H.; Zhang, D. Multispectral palmprint recognition using wavelet-based image fusion. In Proceedings of the IEEE International Conference on Signal Processing (ICSP), Beijing, China, 26–29 October 2008; pp. 2074–2077.
33. Aberni, Y.; Boubchir, L.; Daachi, B. Multispectral palmprint recognition: A state-of-the-art review. In Proceedings of the IEEE International Conference on Telecommunications and Signal Processing, Barcelona, Spain, 5–7 July 2017; pp. 793–797.
34. Bounneche, M.D.; Boubchir, L.; Bouridane, A.; Nekhoul, B.; Cherif, A.A. Multi-spectral palmprint recognition based on oriented multiscale log-Gabor filters. Neurocomputing 2016, 205, 274–286.
35. Hong, D.F.; Liu, W.Q.; Su, J.; Pan, Z.K.; Wang, G.D. A novel hierarchical approach for multispectral palmprint recognition. Neurocomputing 2015, 151, 511–521.
36. Raghavendra, R.; Busch, C. Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition. Pattern Recogn. 2014, 47, 2205–2221.
37. Xu, X.P.; Guo, Z.H.; Song, C.J.; Li, Y.F. Multispectral palmprint recognition using a quaternion matrix. Sensors 2012, 12, 4633–4647.
38. Gumaei, A.; Sammouda, R.; Al-Salman, A.M.; Alsanad, A. An improved multispectral palmprint recognition system using autoencoder with regularized extreme learning machine. Comput. Intell. Neurosci. 2018, 2018, 8041069.
39. Xu, X.B.; Lu, L.B.; Zhang, X.M.; Lu, H.M.; Deng, W.Y. Multispectral palmprint recognition using multiclass projection extreme learning machine and digital shearlet transform. Neural Comput. Appl. 2016, 27, 143–153.
40. El-Tarhouni, W.; Boubchir, L.; Elbendak, M.; Bouridane, A. Multispectral palmprint recognition using Pascal coefficients-based LBP and PHOG descriptors with random sampling. Neural Comput. Appl. 2017, 1–11.
41. Zhang, D.; Guo, Z.H.; Lu, G.M.; Zhang, L.; Zuo, W.M. An online system of multispectral palmprint verification. IEEE Trans. Instrum. Meas. 2010, 59, 480–490.
42. Minaee, S.; Abdolrashidi, A.A. Multispectral palmprint recognition using textural features. In Proceedings of the 2014 IEEE Signal Processing in Medicine and Biology Symposium (SPMB), Philadelphia, PA, USA, 13 December 2014; pp. 1–5.
43. Minaee, S.; Abdolrashidi, A.A. On the power of joint wavelet-DCT features for multispectral palmprint recognition. In Proceedings of the 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 8–11 November 2015; pp. 1593–1597.
44. Li, C.; Benezeth, Y.; Nakamura, K.; Gomez, R.; Yang, F. A robust multispectral palmprint matching algorithm and its evaluation for FPGA applications. J. Syst. Archit. 2018, 88, 43–53.
45. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
  46. Zhang, D.; Kong, W.K.; You, J.; Wong, M. Online palmprint identification. IEEE Trans. Pattern Anal. 2003, 25, 1041–1050. [Google Scholar] [CrossRef] [Green Version]
  47. Donoho, D. For most large underdetermined systems of linear equations the minimal 𝓁1-norm solution is also the sparsest solution. Commun. Pur. Appl. Math. 2006, 59, 797–829. [Google Scholar] [CrossRef]
  48. Yang, M.; Zhang, L.; Yang, J.; Zhang, D. Robust sparse coding for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 625–632. [Google Scholar]
  49. Xu, Y.; Zhong, Z.F.; Jang, J.; You, J.; Zhang, D. A new discriminative sparse representation method for robust face recognition via 𝓁 2 regularization. IEEE Trans. Neural Netw. Learn Syst. 2017, 28, 2233–2242. [Google Scholar] [CrossRef]
  50. l1_ls: Simple MATLAB Solver for l1-Regularized Least Squares Problems. Available online: http://web.stanford.edu/~boyd/l1_ls/ (accessed on 15 May 2008).
  51. Yang, A.Y.; Zhou, Z.H.; Balasubramanian, A.G.; Sastry, S.S.; Ma, Y. Fast 𝓁1-minimization algorithms for robust face recognition. IEEE Trans. Image Process. 2013, 22, 3234–3246. [Google Scholar] [CrossRef] [PubMed]
  52. Zhang, L.; Yang, M.; Feng, X.C. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478. [Google Scholar]
  53. Cortes, C. Support vector network. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  54. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
Figure 1. Principle of the multispectral palmprint acquisition device.
Figure 2. Palmprint images acquired with different spectrums.
Figure 3. Flowchart of the proposed RL2SR-TELM algorithm.
Figure 4. ROI extraction.
Figure 5. Some multispectral palmprint images of the PolyU database.
Figure 6. Some multispectral palmprint images of the PolyU database contaminated by different noises. (a) Palmprint images contaminated by white Gaussian noise. (b) Palmprint images contaminated by salt & pepper noise.
Figure 7. Recognition rates of the RL2SR-TELM algorithm when the number of TELM hidden nodes varies from 1 to 20 in the noise-free case.
Figure 8. Recognition rates of the RL2SR-TELM algorithm when the number of TELM hidden nodes varies from 1 to 20 in the case of white Gaussian noise contamination.
Figure 9. Recognition rates of the RL2SR-TELM algorithm when the number of TELM hidden nodes varies from 1 to 20 in the case of 50% salt & pepper noise contamination.
Figure 10. Recognition rate curves of different classifiers when the training sample number varies from 3 to 6 in the noise-free case.
Figure 11. Some multispectral palmprint images contaminated by different percentages of salt & pepper noise. (a) Original images under the Blue, Green, Red and NIR spectrums. (b–i) Images contaminated by 10–80% salt & pepper noise under different spectrums.
Figure 12. Recognition rates of different algorithms when the training sample number is 3 and the percentage of salt & pepper noise contamination varies from 10% to 80%.
Figure 13. Performance of different multispectral palmprint recognition algorithms in terms of cumulative match characteristic curves.
Table 1. Robust L2 sparse representation algorithm.

Input: testing samples $y_l$ $(l = 1, 2, \ldots, s)$; training sample matrix $X = [X_1, \ldots, X_i, \ldots, X_C]$; initialize the residual function matrix $W_{l,1} = \mathrm{diag}(1, 1, \ldots, 1)$.
Output: linear representation coefficients $A_l$ $(l = 1, 2, \ldots, s)$.
While $error$ has not converged, do:
1. Calculate the collaborative representation code $\gamma_l$ by solving
   $\gamma_{l,t+1} = \arg\min_{\gamma_l} \left\| W_{l,t} (y_l - X \gamma_l) \right\|_2^2 + \xi \left\| \gamma_l \right\|_2^2$.
2. Calculate the residual by employing
   $e_k^{l,t+1} = \left| y_k^l - X_k \gamma_{l,t+1} \right|$, $(k = 1, \ldots, d)$.
3. Calculate the residual function by using
   $\omega\left(e_k^{l,t+1}\right) = \dfrac{\exp\left(-\mu \left(e_k^{l,t+1}\right)^2 + \mu \delta\right)}{1 + \exp\left(-\mu \left(e_k^{l,t+1}\right)^2 + \mu \delta\right)}$, $(k = 1, \ldots, d)$.
4. Update $W_l$ by utilizing
   $W_{l,t+1} = \mathrm{diag}\left(\omega\left(e_1^{l,t+1}\right), \omega\left(e_2^{l,t+1}\right), \ldots, \omega\left(e_d^{l,t+1}\right)\right)$.
5. Calculate $error = \left\| W_{l,t+1} - W_{l,t} \right\|_F / \left\| W_{l,t} \right\|_F$.
End while
6. For each spectral testing sample $y_l$ $(l = 1, 2, \ldots, s)$, calculate $A_l$ by using
   $A_l = \left( 2 \lambda C_M + 2 \lambda X^{\mathrm{T}} X + X^{\mathrm{T}} W_l X \right)^{-1} X^{\mathrm{T}} W_l y_l$.
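To make the iteration in Table 1 concrete, the following NumPy sketch implements the reweighting loop. It is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the scalar defaults (`xi`, `mu`, `delta`, `lam`), and the identity placeholder used for the matrix $C_M$ are supplied here for readability, and step 1 treats $W_{l,t}$ as the per-pixel weight of a weighted ridge regression.

```python
import numpy as np

def rl2sr(y, X, xi=1e-3, mu=8.0, delta=0.5, lam=1e-3, C_M=None,
          tol=1e-3, max_iter=50):
    """Sketch of the RL2SR iteration in Table 1.

    y : (d,)  one spectral band of the test palmprint, vectorized.
    X : (d, n) training samples of all classes, stacked column-wise.
    All scalar parameters are illustrative assumptions, not the
    paper's tuned values.
    """
    d, n = X.shape
    W = np.eye(d)                      # W_{l,1} = diag(1, ..., 1)
    if C_M is None:
        C_M = np.eye(n)                # placeholder for the paper's C_M
    for _ in range(max_iter):
        # Step 1: weighted ridge (collaborative) regression for gamma.
        gamma = np.linalg.solve(X.T @ W @ X + xi * np.eye(n),
                                X.T @ W @ y)
        # Step 2: element-wise representation residual.
        e = np.abs(y - X @ gamma)
        # Step 3: logistic weight of each residual entry.
        z = np.exp(-mu * e**2 + mu * delta)
        w = z / (1.0 + z)
        W_new = np.diag(w)             # Step 4: update the weight matrix.
        # Step 5: relative change of W is the stopping criterion.
        err = np.linalg.norm(W_new - W, 'fro') / np.linalg.norm(W, 'fro')
        W = W_new
        if err < tol:
            break
    # Step 6: closed-form representation coefficients with final weights.
    return np.linalg.solve(2*lam*C_M + 2*lam*(X.T @ X) + X.T @ W @ X,
                           X.T @ W @ y)
```

Running this once per spectral band yields the coefficient vectors $A_l$ that feed the subsequent WSCCI-based adaptive fusion.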
Table 2. Recognition rates for different representation methods.

Representation Method | Noise-Free (%) | White Gaussian Noise (%) | Salt & Pepper Noise (%)
SRC   | 99.64 | 97.84 | 94.28
CRC   | 99.44 | 98.76 | 96.68
DSRM  | 97.96 | 96.68 | 96.28
RL2SR | 99.68 | 99.20 | 97.24
Table 3. Recognition rates for different fusion methods when the training sample number per class varies from 2 to 4 under the noise-free case and different noise contaminations.

Fusion Method | Noise Contamination Case | 2 Samples (%) | 3 Samples (%) | 4 Samples (%)
Sum fusion          | Noise-free           | 97.50 | 99.56 | 99.90
Sum fusion          | White Gaussian noise | 96.70 | 99.44 | 99.65
Sum fusion          | Salt & pepper noise  | 89.63 | 96.56 | 98.55
Min-max fusion      | Noise-free           | 92.83 | 97.68 | 99.25
Min-max fusion      | White Gaussian noise | 92.67 | 97.44 | 99.20
Min-max fusion      | Salt & pepper noise  | 72.53 | 82.16 | 85.85
Our adaptive fusion | Noise-free           | 97.73 | 99.68 | 100.00
Our adaptive fusion | White Gaussian noise | 97.47 | 99.20 | 99.95
Our adaptive fusion | Salt & pepper noise  | 92.27 | 97.24 | 99.05
Table 4. Recognition rates for different classifiers under the noise-free case and noise contaminations.

Classifier | Noise-Free (%) | White Gaussian Noise (%) | Salt & Pepper Noise (%)
NN    | 99.24 | 96.48 | 44.24
KNN   | 97.12 | 93.32 | 38.92
ELM   | 99.18 | 99.16 | 95.55
MPELM | 99.00 | 98.80 | 95.60
RELM  | 99.41 | 98.96 | 96.07
TELM  | 99.68 | 99.20 | 97.24
Table 5. Classification time for different classifiers.

Classifier | Classification Time (s)
NN    | 7.76
KNN   | 5.17
ELM   | 1.51
MPELM | 1.82
RELM  | 1.67
TELM  | 1.59
Table 6. Recognition rates for our RL2SR-TELM with different spectral combinations under the noise-free case and different noise contaminations.

Spectral Combination | Noise-Free (%) | White Gaussian Noise (%) | Salt & Pepper Noise (%)
Blue                  | 99.55  | 98.65 | 80.90
Green                 | 99.50  | 99.25 | 87.65
Red                   | 99.45  | 99.15 | 83.10
NIR                   | 98.65  | 94.50 | 76.75
Blue, Green           | 100.00 | 99.80 | 95.80
Blue, Red             | 99.95  | 99.80 | 93.30
Blue, NIR             | 100.00 | 99.85 | 90.70
Green, Red            | 99.75  | 99.50 | 96.15
Green, NIR            | 100.00 | 99.80 | 95.80
Red, NIR              | 99.90  | 99.90 | 96.60
Blue, Green, Red      | 100.00 | 99.85 | 98.65
Blue, Green, NIR      | 100.00 | 99.90 | 97.60
Blue, Red, NIR        | 100.00 | 99.90 | 97.15
Green, Red, NIR       | 99.95  | 99.85 | 98.35
Blue, Green, Red, NIR | 100.00 | 99.95 | 99.05
Table 7. Recognition rates for different multispectral palmprint recognition algorithms in the noise-free case.

Algorithm | 3 Samples (%) | 4 Samples (%) | 5 Samples (%) | 6 Samples (%)
Deep scattering network method [18] | 100   | 100   | 100   | 100
Texture feature based method [42]   | -     | 99.96 | 99.99 | 100
DCT-based features method [43]      | -     | 99.97 | 100   | 100
Our proposed RL2SR-TELM             | 99.68 | 100   | 100   | 100
Table 8. Recognition rates for our RL2SR-TELM and some other multispectral palmprint recognition algorithms.

Algorithm | Noise-Free (%) | White Gaussian Noise (%) | Salt & Pepper Noise (%)
Matching score-level fusion by LOC [41] | 99.43 | 99.23 | 96.48
DST-MPELM [39]                          | 99.47 | 98.30 | 89.98
AE-RELM [38]                            | 99.16 | 98.48 | 95.76
QPCA + QDWT [37]                        | 98.83 | 93.33 | 90.16
Image-level fusion by DWT [32]          | 99.03 | 96.23 | 82.75
Our proposed RL2SR-TELM                 | 99.68 | 99.20 | 97.24
Table 9. Time cost of our RL2SR-TELM algorithm.

Procedure | RL2SR and Adaptive Fusion | TELM | Total Time
Average time (s) | 0.10892 | 0.00053 | 0.10945
