Article

Face–Iris Multimodal Biometric Identification System

1 NDT Laboratory, Electronics Department, Jijel University, Jijel 18000, Algeria
2 LIASD Laboratory, Department of Computer Science, University of Paris 8, 93526 Saint-Denis, France
3 NDT Laboratory, Automatics Department, Jijel University, Jijel 18000, Algeria
4 LASA Laboratory, Badji Mokhtar-Annaba University, Annaba 23000, Algeria
* Author to whom correspondence should be addressed.
Electronics 2020, 9(1), 85; https://doi.org/10.3390/electronics9010085
Submission received: 28 October 2019 / Revised: 15 December 2019 / Accepted: 17 December 2019 / Published: 1 January 2020
(This article belongs to the Special Issue Recent Advances in Biometrics and its Applications)

Abstract

Multimodal biometrics technology has recently gained interest due to its capacity to overcome certain inherent limitations of single biometric modalities and to improve the overall recognition rate. A common biometric recognition system consists of sensing, feature extraction, and matching modules. The robustness of the system depends largely on its ability to extract relevant information from the individual biometric traits. This paper proposes a new feature extraction technique for a multimodal biometric system using face and iris traits. The iris feature extraction is carried out using an efficient multi-resolution 2D Log-Gabor filter to capture textural information at different scales and orientations. The facial features are computed using the powerful method of singular spectrum analysis (SSA) in conjunction with the wavelet transform. SSA aims at expanding signals or images into interpretable and physically meaningful components. In this study, SSA is applied and combined with the normal inverse Gaussian (NIG) statistical features derived from the wavelet transform. The relevant features from the two modalities are combined at a hybrid fusion level. The evaluation is performed on chimeric databases built from the Olivetti Research Laboratory (ORL) and Face Recognition Technology (FERET) databases for the face and the Chinese Academy of Sciences Institute of Automation (CASIA) v3.0 Interval iris image database (CASIA V3) for the iris. Experimental results show the robustness of the proposed approach.

1. Introduction

The increasing demand for reliable and secure recognition systems in many fields is clear evidence that more attention should be paid to biometrics. Biometric systems provide accurate automatic personal recognition based on physiological characteristics (such as fingerprint, iris, face, and palm print) or behavioral characteristics (such as gait, signature, and typing) that are unique and cannot be lost or forgotten [1]. Biometric recognition systems are used in many areas, such as passport verification, airports, buildings, mobile phones, and identity cards [2]. Unimodal biometric systems measure and analyze a single characteristic of the human body. They have many limitations, such as: (i) Noise in sensed data: the recognition rate of a biometric system is very sensitive to the quality of the biometric sample. (ii) Non-universality: if each individual in a population is able to provide a biometric modality for a given system, this modality is said to be universal; however, not all biometric modalities are truly universal. (iii) Lack of individuality: features extracted from the biometric modalities of different individuals may be relatively identical [2]. (iv) Intra-class variation: the biometric information acquired from an individual during the training process to generate a template will not be identical to the information acquired from the same user during the test process; these variations may be due to poor interaction of the user with the sensor [3]. (v) Spoofing: although it seems difficult to steal a person's biometric modalities, it is always possible to circumvent a biometric system using spoofed biometric modalities. To overcome these disadvantages, one solution is to use several biometric modalities within the same system, which is then referred to as a multi-biometric system [3,4].
Multi-biometric systems can be divided into four categories: multi-sensor, multi-sample, multi-algorithm, and multi-instance [5]. Combining information from multiple biometric sources is known as information fusion, which can be performed at different levels [5,6]. At the sensor level, fusion occurs before the feature extraction module and can be done only if the various acquisitions are instances of the same biometric modality obtained from several compatible sensors. Feature level fusion consists of combining the different feature vectors generated from the different biometric modalities to create a single template or feature vector [3]; feature vectors can be concatenated into a single feature vector only if they are compatible with each other or homogeneous [6]. Match score level fusion is performed after the matcher module, which generates match scores between the test sample and the templates stored in the database as a similarity or dissimilarity indicator for each modality; the fusion process combines the scores obtained by the different matchers into a single matching score [5]. Rank level fusion consists of generating, for each biometric modality, a ranking of the enrolled identities and then fusing the rankings obtained for each individual across the different modalities; the lowest combined rank corresponds to the correct identity. In decision level fusion, each modality goes through its own biometric pipeline (feature extraction, matching, and recognition) and provides a binary decision; decision level fusion then makes the final decision by combining these outputs with rules such as AND and OR [5,6].
A biometric system has two phases: enrolment and recognition. In the enrolment phase, a biometric modality is captured and processed with specific algorithms to obtain a reference biometric template for each user, which is stored in the database. In the recognition phase, a biometric sample is captured and processed as in the previous phase, then compared with the biometric templates stored in the database [7]. Generally, biometric systems can operate in two modes: identification and verification. In identification mode, a biometric sample is captured and processed, then compared against all templates in the database (a one-to-many comparison), and the identity of the matching template is determined. In verification mode, a biometric sample is captured and processed as in the enrolment phase, then compared only to the corresponding template stored in the database (a one-to-one comparison). The result is either acceptance (if the user is genuine) or rejection (if the user is an impostor) [7].
Several multimodal biometric systems using different modalities have been proposed in recent years, including the following. In 1995, Brunelli and Falavigna [8] proposed a multimodal biometric system combining face and voice based on supervised learning and Bayesian theory. In 1998, Hong and Jain [9] combined face and fingerprint at the match score level. In 2002, Kittler and Messer [10] combined voice and face using two trainable classifier methods. In 2003, Ross and Jain [11] combined face, fingerprint, and hand geometry at the match score level. In 2004, Feng et al. [12] combined face and palm print at the feature level. In 2005, Jain et al. [13] combined face, fingerprint, and hand geometry at the score level. In 2006, Li et al. [14] combined palm print, hand shape, and knuckle print at the feature level. In 2011, Meraoumia et al. [15] integrated two different modalities, palm print and finger knuckle print, at the score level. In 2013, Eskandari and Toygar [16] combined face and iris at the feature level. In 2017, Elhoseny et al. [17] investigated the fusion of fingerprint and iris in the identification process; in the same year, Hezil and Boukrouche [18] combined ear and palm print at the feature level. In 2018, Kabir et al. [3] proposed multi-biometric systems based on genuine-impostor score fusion. In 2019, Walia et al. [19] proposed a multimodal biometric system integrating three complementary biometric traits, namely iris, finger vein, and fingerprint, based on an optimal score level fusion model, and Mansour et al. [20] proposed multi-factor authentication based on multimodal biometrics (MFA-MB).
In this study, we chose face and iris patterns to construct a multimodal biometric system for the following reasons. The iris is among the most reliable biometric characteristics: it is a protected organ and has a unique texture that remains unchanged throughout adult life. The iris region is segmented from the eye image for the identification process. The face is the most natural way to recognize a person from an image [6]. Face recognition is friendly and non-invasive (meaning that it does not violate individual privacy), and its deployment cost is relatively low; a simple camera connected to a computer may be sufficient. However, facial recognition remains relatively sensitive to the surrounding environment, which limits the achievable recognition rate. The iris modality, on the other hand, is certainly more intrusive, but it is currently considered one of the most accurate biometrics. The choice of combining these two modalities is supported by the Zephyr analysis, as shown in [6]. In addition, a single capture device with a very high resolution could simultaneously acquire the texture of the iris and the face [21].
A conventional biometric system basically has four components: preprocessing, feature extraction, matching, and decision. The feature extraction method significantly affects the performance of the system; many feature extraction techniques are described in [22]. This paper proposes a multimodal biometric system based on the face and iris, which uses a multi-resolution 2D Log-Gabor filter with spectral regression kernel discriminant analysis (SRKDA) to extract pertinent features from the iris. Furthermore, it proposes a new facial feature extraction technique based on singular spectrum analysis (SSA) modeled by the normal inverse Gaussian (NIG) distribution, together with statistical features (entropy, energy, and skewness) derived from the wavelet transform. The classification process is performed using the fuzzy k-nearest neighbor (FK-NN) classifier.
This paper is organized as follows: Section 2 reviews related work by introducing well-known multimodal biometric systems based on face and iris modalities. Section 3 describes the proposed multimodal biometric system. Section 4 presents the results of the experiments carried out to assess the performance of the proposed approach, and Section 5 concludes the paper.

2. Related Works

The recognition rate of a multimodal system depends on multiple factors, such as the fusion scheme, the fusion technique, the selected features and extraction techniques, the modalities used, and the compatibility of the feature vectors of the various modalities. This section presents a brief overview of the state of the art in face–iris multimodal biometric systems. Recent and important works are summarized in Table 1.

3. Proposed Multimodal Biometric System

This paper proposes a multimodal biometric system based on face and iris modalities as shown in Figure 1. The proposed system is described and detailed in this section.

3.1. Pre-processing

The image pre-processing step aims to process the face and iris images in order to enhance their quality and also to extract the regions of interest (ROIs).
The face is considered the most important part of the human body. The face image is enhanced by applying histogram equalization, which increases the global contrast of the image. Then, the face image is cropped using the center positions of the left and right eyes, which are detected by the Viola–Jones algorithm [34]. Local regions of the face image (left and right eyes, nose, mouth) are detected with the same algorithm. Figure 2 illustrates the pre-processing steps of face recognition.
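For illustration, this preprocessing stage could be realized with OpenCV as in the following minimal sketch; the stock Haar cascade files and the detection parameters are our assumptions, not the authors' exact configuration:

```python
import cv2

# Hypothetical sketch: histogram equalization followed by Viola-Jones
# detection of the face and eye regions, as described above.
def preprocess_face(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.equalizeHist(gray)  # increase the global contrast

    # OpenCV ships pre-trained Viola-Jones (Haar cascade) detectors.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]

    # The eye centers can then be used to align and crop the face region.
    eyes = eye_cascade.detectMultiScale(face)
    eye_centers = [(ex + ew // 2, ey + eh // 2) for (ex, ey, ew, eh) in eyes]
    return face, eye_centers
```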
John Daugman developed the first algorithms for iris recognition, published the first related papers, and gave the first live demonstrations. This paper proposes an iris biometric system based on Daugman's algorithms. The iris region can be approximated by two circles obtained with the snake method: one for the iris–sclera boundary and another, inside the first, for the iris–pupil boundary.
There are two steps for detecting the pupil and iris boundaries:
  • Finding the initial contours of the pupil and iris: we used the Hough transform to find the pupil circle coordinates and then initialized the contour at these points.
  • Searching for the true contours of the pupil and iris using the active contour method. Figure 3 shows an example of the iris segmentation process [35].
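As an illustration of these two steps, the sketch below initializes the pupil contour with a circular Hough transform and refines it with an active contour (snake); the parameter values are indicative assumptions rather than the settings used in the paper:

```python
import cv2
import numpy as np
from skimage.segmentation import active_contour

def segment_pupil(eye_gray):
    """Rough pupil localization (Hough) refined by a snake, as outlined above."""
    blurred = cv2.medianBlur(eye_gray, 5)

    # Step 1: the circular Hough transform gives an initial pupil circle.
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30, minRadius=20, maxRadius=80)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]

    # Step 2: initialize a closed contour on that circle and let the active
    # contour converge to the true iris-pupil boundary.
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([cy + r * np.sin(theta),   # row coordinates
                            cx + r * np.cos(theta)])  # column coordinates
    snake = active_contour(blurred, init, alpha=0.015, beta=10, gamma=0.001)
    return snake  # (N, 2) array of boundary points (row, col)
```

The same procedure, applied with a larger radius search range, yields the iris–sclera boundary.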

3.2. Feature Extraction

3.2.1. Iris Features Extraction

A 2D Log-Gabor filter is used to capture two-dimensional characteristic patterns. Thanks to its added dimension, the filter is tuned not only to a particular frequency but also to a particular orientation. The orientation component is a Gaussian distance function of the angle in polar coordinates. The filter is defined by the following equation:
$$G(f,\theta) = \exp\left(-\frac{\left(\log\left(f/f_{0}\right)\right)^{2}}{2\left(\log\left(\sigma_{f}/f_{0}\right)\right)^{2}}\right)\exp\left(-\frac{\left(\theta-\theta_{0}\right)^{2}}{2\sigma_{\theta}^{2}}\right)$$
where:
  • f_0: center frequency;
  • σ_f: width parameter for the frequency;
  • θ_0: center orientation;
  • σ_θ: width parameter for the orientation.
The filter is applied to the image by convolution between the image and the filter. The multi-resolution 2D Log-Gabor filter $G(f_s, \theta_o)$ is a 2D Log-Gabor filter applied at different scales (s) and orientations (o) [36,37].
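A minimal NumPy sketch of such a filter bank is given below: it builds G(f, θ) on a polar frequency grid and applies it by pointwise multiplication with the image FFT. The scale-spacing rule and default parameter values are assumptions for illustration:

```python
import numpy as np

def log_gabor_bank(rows, cols, n_scales=5, n_orient=8, min_wavelength=3.0,
                   mult=2.1, sigma_f_ratio=0.85, sigma_theta=np.pi / 8):
    """Multi-resolution 2D Log-Gabor filters G(f_s, theta_o), frequency domain."""
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)
    radius = np.sqrt(U**2 + V**2)
    radius[0, 0] = 1.0                          # avoid log(0) at the DC component
    angle = np.arctan2(V, U)

    bank = []
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult**s)   # center frequency of scale s
        radial = np.exp(-(np.log(radius / f0))**2
                        / (2 * np.log(sigma_f_ratio)**2))
        radial[0, 0] = 0.0                      # suppress the DC term
        for o in range(n_orient):
            theta0 = o * np.pi / n_orient       # center orientation
            # Wrapped angular distance keeps the Gaussian symmetric in theta0.
            dtheta = np.arctan2(np.sin(angle - theta0), np.cos(angle - theta0))
            angular = np.exp(-dtheta**2 / (2 * sigma_theta**2))
            bank.append(radial * angular)
    return bank

def apply_bank(image, bank):
    """Convolution with each filter, carried out in the frequency domain."""
    spectrum = np.fft.fft2(image)
    return [np.fft.ifft2(spectrum * g) for g in bank]
```

Here sigma_f_ratio plays the role of σ_f/f_0 in the equation above, so the radial bandwidth is constant across scales.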
The high dimensionality of the extracted features causes efficiency and effectiveness problems in the learning process. One solution is to reduce the original feature set to a small number of features while maintaining or improving the accuracy and/or efficiency of the biometric system. In this work, spectral regression kernel discriminant analysis (SRKDA), proposed by Cai et al. [38], is used as a powerful dimensionality reduction technique for the multi-resolution 2D Log-Gabor features. The SRKDA algorithm is described in [38].

3.2.2. Facial Features Extraction

This paper proposes a new feature extraction method for face recognition based on statistical features generated from the SSA-NIG and wavelet methods. This method extracts relevant information that is invariant to illumination and expression variation, and is described as follows.
SSA is a powerful non-parametric technique used in signal processing and time series analysis. It is also a spectral estimation method, related to the eigenvalues of a covariance matrix, that decomposes a signal into a sum of components, each of which has a specific interpretation. For example, for a short time series, SSA decomposes the signal into oscillatory components PC1, PC2, PC3, …, PCL. SSA is used to solve several problems, such as smoothing, finding structure in short time series, and denoising [39,40,41,42,43].
The SSA technique has two main phases, decomposition and reconstruction of the time series signal, and each phase has its own steps. The decomposition phase has two steps: the embedding step and the singular value decomposition (SVD) step.
Embedding step: transforms a one-dimensional signal $Y_T = (y_1, \ldots, y_T)$ into the multi-dimensional vectors $X_1, \ldots, X_K$, where $X_i = (y_i, \ldots, y_{i+L-1}) \in \mathbb{R}^L$ and $K = T - L + 1$. The single parameter here is the window length L, an integer such that $2 \le L \le T$. The resulting matrix $X = [X_1, \ldots, X_K]$ is called the trajectory matrix.
Singular value decomposition (SVD) step: computes the SVD of the trajectory matrix. The eigenvalues of the matrix $XX^{T}$ are denoted by $\lambda_1, \ldots, \lambda_L$ and its eigenvectors by $U_1, \ldots, U_L$. If we denote $V_i = X^{T}U_i/\sqrt{\lambda_i}$, then the SVD of the trajectory matrix can be written as $X = X_1 + \cdots + X_d$, where $X_i = \sqrt{\lambda_i}\,U_i V_i^{T}$ $(i = 1, \ldots, d)$ [43].
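The following sketch implements these two decomposition steps together with the usual diagonal-averaging (Hankelization) reconstruction, so that each principal component comes back as a series of the original length; the window length and component grouping are left to the caller:

```python
import numpy as np

def ssa_decompose(y, L):
    """1D-SSA: embed the signal y with window length L, take the SVD of the
    trajectory matrix, and rebuild each elementary component by diagonal
    averaging."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    K = T - L + 1
    # Embedding: trajectory (Hankel) matrix, columns X_i = (y_i, ..., y_{i+L-1}).
    X = np.column_stack([y[i:i + L] for i in range(K)])

    # SVD step: X = sum_i sqrt(lambda_i) * U_i * V_i^T.
    U, sing_vals, Vt = np.linalg.svd(X, full_matrices=False)

    components = []
    for i in range(len(sing_vals)):
        Xi = sing_vals[i] * np.outer(U[:, i], Vt[i])            # rank-1 term
        # Diagonal averaging maps the matrix Xi back to a series of length T.
        pc = np.array([Xi[::-1].diagonal(k).mean() for k in range(-L + 1, K)])
        components.append(pc)
    return components  # PC1, PC2, ... ordered by decreasing singular value
```

For example, ssa_decompose(signal, L=4) reproduces the four-component decomposition illustrated in Figure 4.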
A facial image is transformed into a one-dimensional signal vector, and the derived signal is then decomposed into multi-dimensional signals (principal components, PCs) by the decomposition process explained previously. The first component contains the main information, which is not affected by noise, illumination variation, or expression variation. Figure 4 shows an example of one-dimensional singular spectrum analysis (1D-SSA) of a signal with a window of length 4; the original signal is decomposed into four components, PC1, PC2, PC3, and PC4.
The NIG probability density function (pdf) can model non-linear signals such as financial data, economic data, images, and video signals. In this work, NIG modeling is used to capture the statistical variations of the SSA image signal, and the parameters estimated from the NIG pdf are then used as features. The NIG pdf is a variance-mean mixture density function in which the mixing distribution is the inverse Gaussian density; it is given by the following equation:
$$P_{\alpha,\delta}(x) = \frac{\alpha\delta\, e^{\alpha\delta}}{\pi}\, \frac{K_{1}\!\left(\alpha\sqrt{\delta^{2}+x^{2}}\right)}{\sqrt{\delta^{2}+x^{2}}}$$
where:
  • K_1(·): first-order modified Bessel function of the second kind;
  • α: shape (steepness) factor of the NIG pdf;
  • δ: scale factor.
α controls the steepness of the NIG pdf: as α increases, the steepness of the NIG pdf increases as well. The scale factor δ, on the other hand, controls the dispersion of the NIG pdf [44].
The effect of these two parameters on the shape of the NIG pdf is demonstrated in Figure 5. The NIG parameters are estimated using the following formula.
$$\hat{\alpha} = \sqrt{\frac{3 K_{x}^{2}}{K_{x}^{4}}}, \qquad \hat{\delta} = \hat{\alpha}\, K_{x}^{2}$$
where $K_{x}^{2}$ and $K_{x}^{4}$ are the second-order and fourth-order cumulants, respectively. α and δ are computed from each of the SSA segment signals [44].
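As a small illustrative sketch (our notation), α and δ can be estimated from an SSA segment signal via its sample cumulants:

```python
import numpy as np

def nig_parameters(segment):
    """Estimate the NIG steepness (alpha) and scale (delta) factors from the
    second- and fourth-order sample cumulants of a zero-mean segment."""
    x = np.asarray(segment, dtype=float)
    x = x - x.mean()
    k2 = np.mean(x**2)               # second-order cumulant (variance)
    k4 = np.mean(x**4) - 3 * k2**2   # fourth-order cumulant
    # The estimator assumes leptokurtic (heavy-tailed) data, i.e., k4 > 0.
    alpha = np.sqrt(3 * k2 / k4)
    delta = alpha * k2
    return alpha, delta
```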
The NIG pdf models the histogram of facial nonlinear signals as shown in Figure 6.
In addition to the mean and standard deviation generated from the SSA-NIG model, statistical features (entropy, energy, and skewness) derived from the third level of the wavelet transform are used.
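These complementary wavelet statistics could be computed as in the sketch below; the Daubechies mother wavelet is our assumption, as the paper does not name the wavelet used:

```python
import numpy as np
import pywt
from scipy.stats import skew

def wavelet_statistics(signal, wavelet="db4", level=3):
    """Entropy, energy, and skewness of the level-3 wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    features = []
    for c in coeffs:                          # approximation + detail sub-bands
        energy = np.sum(c**2)
        p = c**2 / (energy + 1e-12)           # normalized energy distribution
        entropy = -np.sum(p * np.log2(p + 1e-12))
        features.extend([entropy, energy, skew(c)])
    return np.array(features)
```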

3.3. Classification Process

The proposed system operates in identification mode, in which the feature vectors are compared to the templates stored in the database for each biometric trait during enrollment. The original k-nearest neighbor (K-NN) classifier is among the best-known statistical classification methods; in this work, however, we adopt its fuzzy variant, the fuzzy k-nearest neighbor (FK-NN) classifier, for the classification phase of our multimodal biometric system [45].
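A compact sketch of the FK-NN decision rule in the spirit of [45] is shown below; crisp training memberships and the fuzzifier m = 2 are simplifying assumptions on our part:

```python
import numpy as np

def fuzzy_knn(train_X, train_y, query, k=5, m=2):
    """Fuzzy k-NN: class memberships are weighted by inverse distance, so
    closer neighbors contribute more to the final decision."""
    train_X = np.asarray(train_X, dtype=float)
    train_y = np.asarray(train_y)
    d = np.linalg.norm(train_X - query, axis=1)
    nn = np.argsort(d)[:k]                    # indices of the k nearest neighbors
    # Inverse-distance weights 1 / d^(2/(m-1)); epsilon avoids division by zero.
    w = 1.0 / (d[nn] ** (2.0 / (m - 1)) + 1e-12)
    classes = np.unique(train_y)
    # Membership of the query in each class, normalized to sum to one.
    memberships = np.array([w[train_y[nn] == c].sum() for c in classes]) / w.sum()
    return classes[np.argmax(memberships)], memberships
```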

3.4. Fusion Process

The main structure of the proposed multimodal biometric system is based on the effective combination of the face and iris modalities. In our proposal, the system uses score level fusion and decision level fusion at the same time, in order to exploit the advantages of each fusion level and improve the performance of the biometric system. In score level fusion, the scores are normalized with the min-max and Z-score techniques, and the fusion is performed with the min rule, max rule, sum rule, and weighted sum rule. In decision level fusion, we use the OR rule.

4. Experimental Results

The goal of this paper is to design an optimal and efficient face–iris multimodal biometric system. We start by evaluating the performance of the unimodal systems using only the iris modality and only the face modality; we then build the multimodal biometric system by combining the two systems, selecting the best feature vectors, and using score level fusion and decision level fusion at the same time. The iris is a small internal organ, protected by the eyelids and eyelashes, that is detected within the whole face image; for this reason, it does not affect the performance of the face recognition system, and the iris pattern is independent of the face. Since no true multimodal face–iris database was available, we used chimeric databases for the implementation of the face–iris multimodal biometric system, constructed from the Chinese Academy of Sciences Institute of Automation (CASIA) v3.0 iris image database (CASIA V3) and the Olivetti Research Laboratory (ORL) and Face Recognition Technology (FERET) face databases. These databases are described as follows.
1) CASIA iris database: developed by the Chinese Academy of Sciences Institute of Automation (CASIA). As the oldest iris database, it is the best known and is widely used by the majority of researchers. It presents few defects, and its images have very similar and homogeneous characteristics. CASIA-IrisV3-Interval contains 2655 iris images corresponding to 249 individuals; these images were taken under the same conditions as CASIA V1.0, with a resolution of 320 × 280 pixels [46]. Figure 7a shows example images from the CASIA iris database.
2) ORL face database: the ORL (Olivetti Research Laboratory) database includes 40 individuals, each with 10 images featuring pose and expression variations, for a total of 400 images. The poses were captured at different time intervals. The images are small (11 KB), have a resolution of 92 × 112 pixels, and are grayscale images in the portable graymap (PGM) format [47]. Figure 7b shows example images from the ORL face database.
3) FERET face database: a database of facial imagery collected between December 1993 and August 1996, comprising 11,338 images photographed from 994 subjects at different angles and under different conditions. The images are divided into standard galleries: fa, fb, ra, rb, etc. In this work, the ba, bj, and bk partitions of the color FERET database are considered, where ba is a frontal image, bj is an alternative frontal image, and bk is a frontal image corresponding to ba but taken under different lighting. The images have a resolution of 256 × 384 pixels and are in the joint photographic experts group (JPEG) format [48]. Figure 7c shows example images from the FERET face database.

4.1. Evaluation of Unimodal Biometric Identification Systems

4.1.1. Iris System

In our experiments, every eye image from CASIA Interval V3 was segmented and normalized to 240 × 24 pixels, as shown in Figure 3. Then, the multi-resolution 2D Log-Gabor filter was used to extract pertinent features at different scales s, orientations o, and ratios σ/f0. Next, SRKDA was applied to reduce the dimensionality of the feature vector. A total of 40 subjects were considered, each with seven images; one, two, three, or four images were selected for training and the remaining images were used for testing. The recognition rate was calculated using the following parameter settings: (s = 4, o = 5, σ/f0 = 0.65), (s = 4, o = 5, σ/f0 = 0.85), (s = 5, o = 8, σ/f0 = 0.65), and (s = 5, o = 8, σ/f0 = 0.85). Table 2 shows the recognition rates of the iris identification system, while Figure 8 shows the cumulative match characteristic (CMC) curve of the iris recognition system.
Table 2 gives the recognition rates of the iris recognition system when different numbers of images are used for training; as the number of training images increases, the recognition rate increases as well (using two images for training gives better results than using one image, and so on). The best recognition rate, 97.33%, is obtained using the parameters (s = 5, o = 8, σ/f0 = 0.85) and four images for training. Figure 8 shows the CMC curve of the system, which demonstrates that the system achieves a recognition rate of 100% at rank 7.

4.1.2. Face System

Experimental results were obtained on the two face databases, ORL and FERET, with the goal of selecting the best feature vectors and enhancing performance. The face image was enhanced, then the facial region and local regions (nose, mouth, and eyes) were detected with the Viola–Jones algorithm, as shown in Figure 2. The SSA-NIG method was applied for feature extraction, selecting different components (PC1, PC2, PC3, PC1+PC2, and PC1+PC2+PC3) and different window lengths M of 5, 9, and 12.
For the ORL face database, 40 subjects were considered, each with seven images, as for the CASIA iris database. Evaluation tests were performed using one, two, or three images for training, with the remaining images used for testing. The evaluation results are shown in Table 3 and Figure 9.
Table 3 demonstrates the effect of the window length and of the principal components used in the feature extraction method. The best recognition rates are obtained when taking three images for training: 97%, 94%, and 89% for M = 5, M = 9, and M = 12, respectively. We also note that SSA decomposes the signal into components, and the denoising process eliminates the effects of varying illumination. The best result, a recognition rate of 97.00%, was obtained using the first principal component PC1 and a window length of M = 5. The CMC curve in Figure 9 shows that the proposed system achieves 100% at rank 8.
Experiments were also performed on the FERET database, taking 200 subjects, each with three frontal facial images (ba, bj, and bk). In the tests, one or two images were used for training and the remaining images were used for testing. Table 4 and Figure 10 show the obtained results.
Table 4 shows that, in all experiments, the best results are obtained when using two images for training and one image for testing. The first component of the SSA signal gives the best results compared with PC2, PC3, PC1+PC2, and PC1+PC2+PC3. We also note that using a window length of M = 5 together with the first component achieves a good recognition rate of 95.00%. Figure 10 shows that the system achieves a recognition rate of 100% at rank 9.

4.2. Evaluation of Multimodal Biometric Identification Systems

The experimental results of the proposed face–iris multimodal biometric system are presented in this section. The experiments were conducted on two chimeric multimodal databases: the CASIA iris-ORL face database and the CASIA iris-FERET face database. In the previous section, the unimodal biometric systems were evaluated in order to select the best parameters for the feature extraction steps of the face and iris systems, so as to construct a robust multimodal biometric system by combining the two unimodal systems with the proposed fusion scheme shown in Figure 11. The simplest way to create a multimodal database is to create "virtual" individuals by randomly associating the identities of different individuals from different databases; in this case, face and iris databases are associated.

4.2.1. Tests on CASIA-ORL Multimodal Database

In the evaluation process, 40 subjects were considered, each with seven images. Three images were chosen for training and the remaining images were used for testing. The proposed fusion scheme was implemented, in which the min-max and Z-score normalization methods were used to normalize the scores generated by the face and iris systems. The min rule, max rule, sum rule, and weighted sum rule were used as score fusion rules, and decision level fusion was performed with the OR rule. The fusion rules used are defined by the following equations.
Let $n_i^m$ denote the normalized score for matcher m (m = 1, 2, …, M, where M is the number of matchers) applied to user i (i = 1, 2, …, I, where I is the number of individuals in the database). The fused score for user i is denoted by $f_i$ [2] and is given by:
  • Sum rule:
$$f_i = \sum_{m=1}^{M} n_i^{m}, \quad \forall i$$
  • Maximum rule (max rule):
$$f_i = \max\left(n_i^{1}, n_i^{2}, \ldots, n_i^{M}\right), \quad \forall i$$
  • Minimum rule (min rule):
$$f_i = \min\left(n_i^{1}, n_i^{2}, \ldots, n_i^{M}\right), \quad \forall i$$
  • Weighted sum rule:
$$f_i = w_1 n_i^{1} + w_2 n_i^{2} + \cdots + w_M n_i^{M}, \quad \forall i$$
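These normalization and fusion rules can be summarized in the following sketch; the weights of the weighted sum rule are placeholders, as the paper does not report its exact weight values:

```python
import numpy as np

def min_max_norm(scores):
    """Map one matcher's scores to the [0, 1] range."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def z_score_norm(scores):
    """Zero-mean, unit-variance normalization."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-12)

def fuse(face_scores, iris_scores, rule="max", weights=(0.5, 0.5)):
    """Combine the normalized per-identity scores of the two matchers."""
    stacked = np.vstack([face_scores, iris_scores])  # shape (M = 2, I identities)
    if rule == "sum":
        return stacked.sum(axis=0)
    if rule == "max":
        return stacked.max(axis=0)
    if rule == "min":
        return stacked.min(axis=0)
    if rule == "wsum":
        w = np.asarray(weights)[:, None]
        return (w * stacked).sum(axis=0)
    raise ValueError(f"unknown fusion rule: {rule}")

# Example: min-max normalization followed by the max rule, the combination
# that performed best in the experiments reported below.
fused = fuse(min_max_norm([0.2, 0.8, 0.4]), min_max_norm([0.6, 0.9, 0.1]),
             rule="max")
```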
The experimental results are shown in Table 5 and Figure 12. The best recognition rate of the proposed face–iris multimodal biometric system, 99.16% at rank 1, is obtained with min-max normalization and max rule fusion. The CMC curve in Figure 12 demonstrates that the proposed system achieves 100% at rank 5.

4.2.2. Tests on CASIA-FERET Multimodal Database

In this experiment, 200 subjects were taken randomly from the CASIA and FERET databases to construct a chimeric multimodal database. Each subject had three images; two were used for training and one for testing. The proposed fusion scheme was implemented as for the first database. The obtained results are shown in Table 6 and Figure 13.
Table 6 gives the recognition rates of the proposed multimodal system using the min-max and Z-score normalization methods with the min rule, max rule, sum rule, and weighted sum rule as fusion methods. The best recognition rate, 99.33%, is reached with min-max normalization and max rule fusion. The proposed system is robust and achieves a recognition rate of 100% at rank 3.

5. Conclusion

This paper describes an effective and efficient face–iris multimodal biometric system that has appealingly low complexity and focuses on diverse and complementary features. The iris features are extracted with a multi-resolution 2D Log-Gabor filter combined with SRKDA, while the facial features are computed using the SSA-NIG method. The evaluation of the unimodal biometric systems allowed us to select the best parameters of the two feature extraction methods in order to construct a reliable multimodal system. The fusion of the face and iris features is performed using score fusion and decision fusion. Experiments were performed on the CASIA-ORL and CASIA-FERET databases. The experimental results show that the proposed face–iris multimodal system improves on the performance of the unimodal systems based on the face or iris alone. The best recognition rates, 99.16% and 99.33% for the CASIA-ORL and CASIA-FERET databases, respectively, are obtained with min-max normalization and max rule fusion. In future work, we plan to explore the potential of deep learning to extract high-level representations from data, combined with traditional machine learning to compute useful features.

Author Contributions

Conceptualization, T.B., M.R., L.B. and B.A.; methodology, T.B., L.B., M.R. and B.A.; software, T.B. and B.A.; validation, T.B. and B.A.; formal analysis, B.A., T.B., L.B. and M.R.; investigation, B.A., T.B., M.R. and L.B.; resources, B.A., T.B., M.R. and L.B.; data curation, B.A., T.B., M.R. and L.B.; writing—original draft preparation, B.A. and T.B.; writing—review and editing, B.A., T.B., L.B. and M.R.; visualization, B.A., T.B., M.R. and L.B.; supervision, T.B. and L.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NDT Laboratory of Jijel University (Ministry of Higher Education and Scientific Research of the People's Democratic Republic of Algeria) and the LIASD Laboratory, University of Paris 8 (France).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eskandari, M.; Toygar, Ö. A new approach for face-iris multimodal biometric recognition using score fusion. Int. J. Pattern Recognit. Artif. Intell. 2013, 27. [Google Scholar] [CrossRef]
  2. Ammour, B.; Bouden, T.; Boubchir, L. Face-Iris Multimodal Biometric System Based on Hybrid Level Fusion. In Proceedings of the 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018. [Google Scholar]
  3. Kabir, W.; Omair Ahmad, M.; Swamy, M.N.S. Normalization and Weighting Techniques Based on Genuine-impostor Score Fusion in Multi-biometric Systems. IEEE Trans. Inf. Forensics Secur. 2018, 13. [Google Scholar] [CrossRef]
  4. Matin, A.; Mahmud, F.; Ahmed, T.; Ejaz, M.S. Weighted Score Level Fusion of Iris and Face to Identify an Individual. In Proceedings of the International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 16–18 February 2017. [Google Scholar]
  5. Sim, M.H.; Asmuni, H.; Hassan, R.; Othman, R.M. Multimodal biometrics: Weighted score level fusion based on non-ideal iris and face images. Expert Syst. Appl. 2014, 41, 5390–5404. [Google Scholar] [CrossRef]
  6. Morizet, N. Reconnaissance Biométrique par Fusion Multimodale du Visage et de l’Iris. Ph.D. Thesis, National School of Telecommunications and Electronics of Paris, Paris, France, 2009. [Google Scholar]
  7. Jamdar, C.; Boke, A. A review paper on person identification system using multi-model biometric based on face. Int. J. Sci. Eng. Technol. Res. 2017, 6, 626–629. [Google Scholar]
  8. Jain, A.K.; Nandakumar, K.; Ross, A. Score normalization in multimodal biometric systems. Pattern Recognit. 2005, 38, 2270–2285. [Google Scholar] [CrossRef]
  9. Brunelli, R.; Falavigna, D. Person identification using multiple cues. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 955–966. [Google Scholar] [CrossRef]
  10. Hong, L.; Jain, A. Integrating faces and fingerprints for person identification. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1295–1307. [Google Scholar] [CrossRef] [Green Version]
  11. Kittler, J.; Messer, K. Fusion of Multiple Experts in Multimodal Biometric Personal Identity Verification Systems. In Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, Martigny, Switzerland, 9–11 December 2002. [Google Scholar]
  12. Ross, A.; Jain, A.K. Information fusion in biometrics. Pattern Recognit. Lett. 2003, 24, 2115–2125. [Google Scholar] [CrossRef]
  13. Feng, G.; Dong, K.; Hu, D. When Faces Are Combined With Palmprints: A Novel Biometric Fusion Strategy. In Proceedings of the International Conference on Biometric Authentication, Hong Kong, China, 15–17 July 2004; pp. 701–707. [Google Scholar]
  14. Li, Q.; Qiu, Z.; Sun, D. Feature-level Fusion of Hand Biometrics for Personal Verification Based on Kernel PCA. In Lecture Notes in Computer Science, Advances in Biometrics; Zhang, D., Jain, A.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3832. [Google Scholar]
  15. Meraoumia, A.; Chitroub, S.; Bouridane, A. Fusion of Finger-Knuckle-Print and Palmprint for an Efficient Multi-biometric System of Person Recognition. In Proceedings of the IEEE ICC, Kyoto, Japan, 5–9 June 2011. [Google Scholar]
  16. Lin, S.; Wang, Y.; Xu, T.; Tang, Y. Palmprint and Palm Vein Multimodal Fusion Biometrics Based on MMNBP. In Biometric Recognition, Lecture Notes in Computer Science; You, Z., Zhou, J., Wang, Y., Sun, Z., Shan, S., Zheng, W., Feng, J., Zhao, Q., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; Volume 9967, pp. 326–336. [Google Scholar]
  17. Elhoseny, M.; Essa, E.; Elkhateb, A.; Hassanien, A.E.; Hamad, A. Cascade Multimodal Biometric System Using Fingerprint and Iris Patterns. In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics, Cairo, Egypt, 26–28 October 2017; pp. 590–599. [Google Scholar]
  18. Hezil, N.; Boukrouche, A. Multimodal biometric recognition using human ear and palmprint. IET Biom. 2017, 6, 351–359. [Google Scholar] [CrossRef]
  19. Walia, G.S.; Singh, T.; Singh, K.; Verma, N. Robust Multimodal Biometric System based on Optimal Score Level Fusion Model. Expert Syst. Appl. 2019, 116, 364–376. [Google Scholar] [CrossRef]
  20. Mansour, A.; Sadik, M.; Sabir, E.; Jebbar, M. AMBAS: An autonomous multimodal biometric authentication system. Int. J. Auton. Adapt. Commun. Syst. 2019, 12, 187–217. [Google Scholar]
  21. Sharma, D.; Kumar, A. An Empirical Analysis Over the Four Different Feature-Based Face and Iris Biometric Recognition Techniques. Int. J. Adv. Comput. Sci. Appl. 2012, 3, 13. [Google Scholar] [CrossRef] [Green Version]
  22. Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. Int. J. Comput. Vis. 2019, 127, 74–109. [Google Scholar] [CrossRef] [Green Version]
  23. Son, B.; Lee, Y. Biometric authentication system using reduced joint feature vector of iris and face. In Audio- and Video-Based Biometric Person Authentication; Lecture Notes in Computer Science; Kanade, T., Jain, A., Ratha, N.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3546, pp. 513–522. [Google Scholar]
  24. Zhang, Z.; Wang, R.; Pan, K.; Li, S.Z.; Zhang, P. Fusion of Near Infrared Face and Iris Biometrics. In Advances in Biometrics, Lecture Notes in Computer Science; Lee, S.W., Li, S.Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4642, pp. 172–180. [Google Scholar]
  25. Morizet, N.; Gilles, J. A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments. In Proceedings of the 4th International Symposium on Advances in Visual Computing, Las Vegas, NV, USA, 1–3 December 2008; pp. 661–671. [Google Scholar]
  26. Rattani, A.; Tistarelli, M. Robust multi-modal and multi-unit feature level fusion of face and iris biometrics. In Advances in Biometrics, Lecture Notes in Computer Science; Tistarelli, M., Nixon, M.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5558, pp. 960–969. [Google Scholar]
  27. Wang, Z.; Wang, E.; Wang, S.H.; Ding, Q. Multimodal Biometric System Using Face-Iris Fusion Feature. J. Comput. 2011, 6, 931–938. [Google Scholar] [CrossRef] [Green Version]
  28. Roy, K.; O’Connor, B.; Ahmad, F. Multibiometric System Using Level Set, Modified LBP and Random Forest. Int. J. Image Graph. 2014, 14, 1–19. [Google Scholar] [CrossRef]
  29. Eskandari, M.; Toygar, O. Fusion of face and iris biometrics using local and global feature extraction methods, Signal. Image Video Process. 2014, 8, 995–1006. [Google Scholar] [CrossRef]
  30. Huo, G.; Liu, Y.; Zhu, X.; Dong, H.; He, F. Face–iris multimodal biometric scheme based on feature level fusion. J. Electron. Imaging 2015, 24. [Google Scholar] [CrossRef]
  31. Moutafis, P.; Kakadiaris, I.A. Rank-Based Score Normalization for Multi-Biometric Score Fusion. In Proceedings of the IEEE International Symposium on Technologies for Homeland Security, Waltham, MA, USA, 5–6 November 2015. [Google Scholar]
  32. Eskandari, M.; Toygar, Ö. Selection of optimized features and weights on face-iris fusion using distance images. Comput. Vis. Image Underst. 2015, 137, 63–75. [Google Scholar] [CrossRef]
  33. Bouzouina, Y.; Hamami, L. Multimodal Biometric: Iris and face Recognition based on feature selection of Iris with GA and scores level fusion with SVM. In Proceedings of the International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 30 August–1 September 2017. [Google Scholar]
  34. Yang, J.; Zhang, D.; Yang, J.-Y.; Niu, B. Globally maximizing, locally minimizing: Unsupervised discriminant projection with applications to face and palm biometrics. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 650–664. [Google Scholar] [CrossRef] [Green Version]
  35. Ammour, B.; Bouden, T.; Amira-Biad, S. Multimodal biometric identification system based on the face and iris. In Proceedings of the International Conference on Electrical Engineering, Boumerdes, Algeria, 29–31 October 2017. [Google Scholar]
  36. Du, Y. Using 2D Log-Gabor Spatial Filters for Iris Recognition. In Proceedings of the Biometric Technology for Human Identification, FL, USA, 17–21 April 2006. [Google Scholar]
  37. Bounneche, M.D.; Boubchir, L.; Bouridane, A. Multi-spectral palmprint Recognition based on Oriented Multiscale Log-Gabor Filters. Neurocomputing 2016, 205, 274–286. [Google Scholar] [CrossRef] [Green Version]
  38. Cai, D.; He, X.; Han, J. Speed up kernel discriminant analysis. Int. J. Very Large Data Bases 2011, 20, 21–33. [Google Scholar] [CrossRef] [Green Version]
  39. Kume, K.; Nose-Togawa, N. Filter Characteristics in Image Decomposition with Singular Spectrum Analysis. Adv. Data Sci. Adapt. Anal. 2016, 8, 1650002. [Google Scholar] [CrossRef] [Green Version]
  40. Zabalza, J.; Ren, J.; Marshall, S. Singular Spectrum Analysis for effective noise removal and improved data classification in Hyperspectral Imaging. In Proceedings of the IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014. [Google Scholar]
  41. Golyandina, N.; Korobeynikov, A.; Zhigljavsky, A. Singular Spectrum Analysis with R (Use R!), 1st ed.; Springer: Berlin/Heidelberg, Germany, 2018; ISBN-10 3662573784, ISBN-13 978-3662573785. [Google Scholar]
  42. Leles, M.C.R.; Sansão, J.P.H.; Mozelli, L.A.; Guimarães, H.N. Improving reconstruction of time-series based on Singular Spectrum Analysis: A segmentation approach. Digit. Signal Process. 2018, 77, 63–76. [Google Scholar] [CrossRef]
  43. Hassani, H. Singular Spectrum Analysis: Methodology and Comparison. J. Data Sci. 2007, 5, 239–257. [Google Scholar]
  44. Rashik Hassan, A.; Hassan Bhuiyan, M.I. An automated method for sleep staging from EEG signals using normal inverse Gaussian parameters and adaptive boosting. Neurocomputing 2017, 5, 76–87. [Google Scholar] [CrossRef]
  45. Shang, W.; Huang, H.; Zhu, H.; Lin, Y.; Wang, Z.; Qu, Y. An Improved kNN Algorithm-Fuzzy kNN. In Computational Intelligence and Security; CIS 2005; Hao, Y., Liu, J., Wang, Y., Cheung, Y., Yin, H., Jiao, L., Ma, J., Jiao, Y.-C., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3801. [Google Scholar]
  46. CASIA-IrisV3 Database. Available online: http://www.cbsr.ia.ac.cn/IrisDatabase.htm (accessed on 15 December 2019).
  47. ORL. Available online: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html (accessed on 15 December 2019).
  48. FERET Database. Available online: http://www.nist.gov/feret/gnd/feret_gnd.dtd (accessed on 15 December 2019).
Figure 1. Block diagram of the proposed multimodal biometric system.
Figure 2. Face detection and preprocessing.
Figure 3. Iris segmentation and normalization.
Figure 4. One-dimensional singular spectrum analysis (1D-SSA) of a signal.
Figure 5. Effect of α and δ on the shape of the normal inverse Gaussian probability density function (NIG pdf).
Figure 6. The corresponding NIG pdfs constructed from the estimated α and δ (in red).
Figure 7. Examples of face and iris images from the (a) Chinese Academy of Sciences Institute of Automation (CASIA), (b) Olivetti Research Laboratory (ORL), and (c) Face Recognition Technology (FERET) databases.
Figure 8. Cumulative match characteristic (CMC) curve for the iris unimodal biometric system on the CASIA database.
Figure 9. Cumulative match characteristic (CMC) curve for the face unimodal biometric system on the ORL face database.
Figure 10. Cumulative match characteristic (CMC) curve for the face unimodal biometric system on the FERET face database.
Figure 11. Scheme of the proposed face–iris multimodal biometric system.
Figure 12. Cumulative match characteristic (CMC) curve for the proposed face–iris multimodal system on the CASIA-ORL database.
Figure 13. Cumulative match characteristic (CMC) curve for the proposed face–iris multimodal system on the CASIA-FERET database.
Table 1. Related works integrating both face and iris modalities.

| Authors | Feature Extraction | Fusion Process and Normalization | Matching |
|---|---|---|---|
| B. Son and Y. Lee [23] (2005) | Multi-level 2D Daubechies wavelet transform for feature extraction; direct linear discriminant analysis (DLDA) for dimensionality reduction. | Feature-level fusion; features are concatenated. | Euclidean distance. |
| Z. Zhang et al. [24] (2007) | Near-infrared (NIR) face–iris image database; face recognition based on eigenfaces, iris recognition based on Daugman's algorithm. | Score-level fusion; min-max normalization; sum rule and product rule. | Hamming distance. |
| N. Morizet and J. Gilles [25] (2008) | Facial features extracted by Log-Gabor principal component analysis (LGPCA); iris features extracted with 3-level wavelet packets. | Score-level fusion; matchers modeled as a Gaussian distribution. | Cosine similarity. |
| A. Rattani and M. Tistarelli [26] (2009) | Scale-invariant feature transform (SIFT) and spatial sampling for the selection process. | Feature-level fusion; features are concatenated. | Euclidean distance. |
| Z. Wang et al. [27] (2011) | Facial features extracted with eigenfaces; iris features based on Daugman's algorithm. | Feature-level fusion. | Euclidean distance. |
| K. Roy et al. [28] (2014) | Histogram of modified local binary pattern (MLBP); optimal feature subset selected with random forest (RF). | Feature-level fusion; features are concatenated. | Manhattan distance. |
| M. Eskandari and O. Toygar [29] (2014) | Face features extracted with LBP as a local extractor; iris features extracted with subspace LDA as a global extractor. | Score fusion; score normalization with Tanh; fusion with weighted sum rule. | Euclidean distance and Hamming distance. |
| G. Huo et al. [30] (2015) | 2D Gabor filter at different scales and orientations, transformed by histogram statistics into an energy-orientation representation; PCA for dimensionality reduction. | Feature-level fusion; features are concatenated. | Support vector machine (SVM). |
| H. M. Sim et al. [5] (2014) | Facial features extracted with eigenfaces; iris features extracted with the NeuWave network method. | Weighted score-level fusion. | Euclidean distance and Hamming distance. |
| P. Moutafis et al. [31] (2015) | Facial features extracted with eigenfaces; iris features based on Daugman's algorithm. | Score-level fusion; rank-based score normalization framework (RBSN). | Pairwise distances. |
| M. Eskandari and O. Toygar [32] (2015) | Iris features extracted with a 1D Log-Gabor filter; five local and global face feature sets extracted using subspace PCA, modular PCA, and LBP; particle swarm optimization (PSO) for feature selection. | Score-level and feature-level fusion; normalization with Tanh. | Weighted sum rule. |
| Y. Bouzouina et al. [33] (2017) | Facial features extracted with PCA and the discrete cosine transform (DCT); iris features with the 1D Log-Gabor filter and Zernike moments; genetic algorithm (GA) for dimensionality reduction. | Score fusion; Tanh normalization. | Support vector machine (SVM). |
| B. Ammour et al. [2] (2018) | Multi-resolution 2D Log-Gabor filter combined with spectral regression kernel discriminant analysis. | Hybrid level of fusion. | Euclidean distance. |
Table 2. Recognition rate of the iris unimodal biometric system performed on the CASIA database.

| s | o | σ/f0 | 1 Image | 2 Images | 3 Images | 4 Images |
|---|---|------|---------|----------|----------|----------|
| 4 | 5 | 0.65 | 89.16% | 95.00% | 93.33% | 95.25% |
| 4 | 5 | 0.85 | 90.75% | 93.00% | 94.50% | 96.00% |
| 5 | 8 | 0.65 | 92.00% | 95.33% | 95.00% | 96.50% |
| 5 | 8 | 0.85 | 93.50% | 96.00% | 96.75% | 97.33% |
Table 3. Recognition rate of the face unimodal biometric system performed on the ORL face database.

| Components | M = 5, 1 Image | M = 5, 2 Images | M = 5, 3 Images | M = 9, 1 Image | M = 9, 2 Images | M = 9, 3 Images | M = 12, 1 Image | M = 12, 2 Images | M = 12, 3 Images |
|---|---|---|---|---|---|---|---|---|---|
| PC1 | 85.62% | 86.87% | 97.00% | 76.33% | 86.87% | 94.00% | 77.77% | 80.71% | 89.00% |
| PC2 | 66.87% | 71.87% | 78.00% | 66.22% | 68.42% | 75.00% | 66.25% | 67.85% | 78.00% |
| PC3 | 65.62% | 74.37% | 82.00% | 51.22% | 59.28% | 71.00% | 59.28% | 60.62% | 71.00% |
| PC1+PC2 | 84.37% | 87.85% | 92.00% | 86.87% | 84.28% | 93.00% | 75.00% | 83.57% | 93.00% |
| PC1+PC2+PC3 | 85.62% | 83.57% | 90.00% | 83.75% | 82.85% | 90.00% | 73.12% | 87.14% | 91.00% |
Table 4. Recognition rate of the face unimodal biometric system performed on the FERET face database.

| Components | M = 5, 1 Image | M = 5, 2 Images | M = 9, 1 Image | M = 9, 2 Images | M = 12, 1 Image | M = 12, 2 Images |
|---|---|---|---|---|---|---|
| PC1 | 90.87% | 95.00% | 88.75% | 92.00% | 91.33% | 94.00% |
| PC2 | 84.33% | 87.33% | 75.33% | 90.50% | 83.75% | 85.25% |
| PC3 | 82.50% | 85.50% | 70.00% | 87.33% | 79.50% | 80.33% |
| PC1+PC2 | 87.00% | 93.75% | 89.50% | 90.75% | 89.33% | 94.75% |
| PC1+PC2+PC3 | 85.25% | 91.00% | 88.00% | 90.00% | 86.50% | 93.00% |
Table 5. Recognition rates of the proposed face–iris multimodal system on the CASIA-ORL database.

| Fusion Rule | Min-Max | Z-Score |
|---|---|---|
| Min rule | 68.33% | 56.83% |
| Max rule | 99.16% | 97.50% |
| Sum rule | 98.33% | 80.25% |
| Weighted sum rule | 97.50% | 76.66% |
Table 6. Recognition rates of the proposed face–iris multimodal system on the CASIA-FERET database.

| Fusion Rule | Min-Max | Z-Score |
|---|---|---|
| Min rule | 86.66% | 95.50% |
| Max rule | 99.33% | 99.00% |
| Sum rule | 98.50% | 98.00% |
| Weighted sum rule | 96.00% | 97.16% |
