Face–Iris Multimodal Biometric Identification System
Abstract
1. Introduction
2. Related Works
3. Proposed Multimodal Biometric System
3.1. Pre-processing
- Finding the initial contour of the pupil and iris: we used the Hough transform to find the pupil circle coordinates and then initialized the contour at these points (a minimal sketch of this step is given below).
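As an illustration of this step, the following minimal Python sketch uses OpenCV's circular Hough transform to locate the pupil and then samples an initial contour on the detected circle. The function names and all parameter values (blur kernel, Hough thresholds, radius range) are illustrative assumptions, not the settings used in the paper.

```python
import cv2
import numpy as np

def locate_pupil(eye_gray):
    """Locate the pupil with the circular Hough transform and return (x, y, r)
    to seed the contour initialization. Parameter values are illustrative."""
    # Smooth to suppress eyelash and specular noise before edge detection.
    blurred = cv2.medianBlur(eye_gray, 5)
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1,          # accumulator resolution (same as the image)
        minDist=100,   # only one pupil expected per eye image
        param1=100,    # upper Canny threshold
        param2=30,     # accumulator threshold: lower values find more circles
        minRadius=20,
        maxRadius=80,
    )
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r

def initial_contour(x, y, r, n_points=100):
    """Sample n_points on the detected circle as the initial contour."""
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    return np.stack([x + r * np.cos(t), y + r * np.sin(t)], axis=1)
```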
3.2. Feature Extraction
3.2.1. Iris Features Extraction
- f₀: center frequency;
- σf: width parameter for the frequency;
- θ₀: center orientation;
- σθ: width parameter of the orientation.
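For reference, the textbook transfer function of the 2D Log-Gabor filter in the polar frequency domain, written with the parameters listed above (the multi-resolution variant applies a bank of such filters at several scales and orientations):

```latex
G(f,\theta) =
\exp\!\left(-\frac{\bigl(\log(f/f_{0})\bigr)^{2}}{2\bigl(\log(\sigma_{f}/f_{0})\bigr)^{2}}\right)
\exp\!\left(-\frac{(\theta-\theta_{0})^{2}}{2\sigma_{\theta}^{2}}\right)
```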
3.2.2. Facial Features Extraction
- K₁(·): the first-order modified Bessel function of the second kind.
- α: shape (tail-heaviness) parameter of the NIG pdf.
- δ: scale parameter of the NIG pdf.
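For completeness, the standard four-parameter form of the NIG probability density function is recalled below; the asymmetry β and the location μ do not appear in the list above but are part of the usual definition:

```latex
f(x;\alpha,\beta,\mu,\delta) =
\frac{\alpha\delta\,K_{1}\!\left(\alpha\sqrt{\delta^{2}+(x-\mu)^{2}}\right)}
     {\pi\sqrt{\delta^{2}+(x-\mu)^{2}}}\,
\exp\!\left(\delta\sqrt{\alpha^{2}-\beta^{2}}+\beta(x-\mu)\right)
```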
3.3. Classification Process
3.4. Fusion Process
4. Experimental Results
- (1) CASIA iris database: developed by the Chinese Academy of Sciences Institute of Automation (CASIA). As the oldest iris database, it is the best known and the most widely used; its images present few defects and have very similar, homogeneous characteristics. CASIA-IrisV3-Interval contains 2655 iris images from 249 individuals, taken under the same conditions as CASIA V1.0 with a resolution of 320 × 280 pixels [46]. Figure 7a shows example images from the CASIA iris database.
- (2) ORL face database: the ORL (Olivetti Research Laboratory) database contains 400 images: 40 individuals with 10 images each, showing pose and expression variations and captured at different times. The images are small (about 11 KB each), have a resolution of 92 × 112 pixels, and are stored as grayscale in portable graymap (PGM) format [47]. Figure 7b shows example images from the ORL face database.
- (3) FERET face database: a database of facial imagery collected between December 1993 and August 1996, comprising 11,338 images of 994 subjects photographed at different angles and under different conditions. The images are divided into standard galleries (fa, fb, ra, rb sets, etc.). In this work, the ba, bj, and bk partitions of the color FERET database are considered, where ba is a frontal image, bj is an alternative frontal image, and bk is a frontal image corresponding to ba but taken under different lighting. The images have a resolution of 256 × 384 pixels and are stored in JPEG (Joint Photographic Experts Group) format [48]. Figure 7c shows example images from the FERET face database.
4.1. Evaluations of Unimodal Biometric Identification Systems
4.1.1. Iris System
4.1.2. Face System
4.2. Evaluations of Multimodal Biometric Identification Systems
4.2.1. Tests on CASIA-ORL Multimodal Database
- Sum rule: the normalized face and iris scores are added to form the fused score.
- Maximum rule (Max rule): the larger of the two normalized scores is kept as the fused score.
- Minimum rule (Min rule): the smaller of the two normalized scores is kept as the fused score.
- Weighted sum rule fusion: each normalized score is multiplied by a modality-specific weight before summation (a minimal sketch of these rules is given after this list).
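The following minimal NumPy sketch illustrates the two normalization schemes (Min-Max and Z-score) and the four score-fusion rules evaluated in the experiments. It is an illustration only: the weights in the weighted sum rule are placeholders, not the values tuned in the paper, and it assumes that a larger normalized score indicates a better match.

```python
import numpy as np

def min_max(scores):
    """Min-Max normalization: maps scores to the [0, 1] range."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(scores):
    """Z-score normalization: zero mean, unit variance."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def fuse(face_scores, iris_scores, rule="sum", w_face=0.5, w_iris=0.5):
    """Combine two normalized score vectors with a fixed fusion rule.
    The 0.5/0.5 weights are placeholders, not the paper's values."""
    f, i = np.asarray(face_scores), np.asarray(iris_scores)
    if rule == "sum":
        return f + i
    if rule == "max":
        return np.maximum(f, i)
    if rule == "min":
        return np.minimum(f, i)
    if rule == "weighted_sum":
        return w_face * f + w_iris * i
    raise ValueError(f"unknown rule: {rule}")

# Example: normalize each modality's match scores, fuse them, then pick
# the class with the best fused score (larger score = better match here).
face = min_max([0.2, 0.8, 0.5])
iris = min_max([0.1, 0.9, 0.4])
predicted = int(np.argmax(fuse(face, iris, rule="weighted_sum")))
```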
4.2.2. Tests on CASIA-FERET Multimodal Database
5. Conclusion
Author Contributions
Funding
Conflicts of Interest
References
- Eskandari, M.; Toygar, Ö. A new approach for face-iris multimodal biometric recognition using score fusion. Int. J. Pattern Recognit. Artif. Intell. 2013, 27.
- Ammour, B.; Bouden, T.; Boubchir, L. Face-Iris Multimodal Biometric System Based on Hybrid Level Fusion. In Proceedings of the 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018.
- Kabir, W.; Omair Ahmad, M.; Swamy, M.N.S. Normalization and Weighting Techniques Based on Genuine-impostor Score Fusion in Multi-biometric Systems. IEEE Trans. Inf. Forensics Secur. 2018, 13.
- Matin, A.; Mahmud, F.; Ahmed, T.; Ejaz, M.S. Weighted Score Level Fusion of Iris and Face to Identify an Individual. In Proceedings of the International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox's Bazar, Bangladesh, 16–18 February 2017.
- Sim, M.H.; Asmuni, H.; Hassan, R.; Othman, R.M. Multimodal biometrics: Weighted score level fusion based on non-ideal iris and face images. Expert Syst. Appl. 2014, 41, 5390–5404.
- Morizet, N. Reconnaissance Biométrique par Fusion Multimodale du Visage et de l'Iris [Biometric Recognition by Multimodal Fusion of Face and Iris]. Ph.D. Thesis, National School of Telecommunications and Electronics of Paris, Paris, France, 2009.
- Jamdar, C.; Boke, A. Review paper on person identification system using multi-model biometric based on face. Int. J. Sci. Eng. Technol. Res. 2017, 6, 626–629.
- Jain, A.K.; Nandakumar, K.; Ross, A. Score normalization in multimodal biometric systems. Pattern Recognit. 2005, 38, 2270–2285.
- Brunelli, R.; Falavigna, D. Person identification using multiple cues. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 955–966.
- Hong, L.; Jain, A. Integrating faces and fingerprints for person identification. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1295–1307.
- Kittler, J.; Messer, K. Fusion of Multiple Experts in Multimodal Biometric Personal Identity Verification Systems. In Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, Martigny, Switzerland, 9–11 December 2002.
- Ross, A.; Jain, A.K. Information fusion in biometrics. Pattern Recognit. Lett. 2003, 24, 2115–2125.
- Feng, G.; Dong, K.; Hu, D. When Faces Are Combined with Palmprints: A Novel Biometric Fusion Strategy. In Proceedings of the International Conference on Biometric Authentication, Hong Kong, China, 15–17 July 2004; pp. 701–707.
- Li, Q.; Qiu, Z.; Sun, D. Feature-level Fusion of Hand Biometrics for Personal Verification Based on Kernel PCA. In Advances in Biometrics, Lecture Notes in Computer Science; Zhang, D., Jain, A.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3832.
- Meraoumia, A.; Chitroub, S.; Bouridane, A. Fusion of Finger-Knuckle-Print and Palmprint for an Efficient Multi-biometric System of Person Recognition. In Proceedings of the IEEE International Conference on Communications (ICC), Kyoto, Japan, 5–9 June 2011.
- Lin, S.; Wang, Y.; Xu, T.; Tang, Y. Palmprint and Palm Vein Multimodal Fusion Biometrics Based on MMNBP. In Biometric Recognition, Lecture Notes in Computer Science; You, Z., Zhou, J., Wang, Y., Sun, Z., Shan, S., Zheng, W., Feng, J., Zhao, Q., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; Volume 9967, pp. 326–336.
- Elhoseny, M.; Essa, E.; Elkhateb, A.; Hassanien, A.E.; Hamad, A. Cascade Multimodal Biometric System Using Fingerprint and Iris Patterns. In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics, Cairo, Egypt, 26–28 October 2017; pp. 590–599.
- Hezil, N.; Boukrouche, A. Multimodal biometric recognition using human ear and palmprint. IET Biom. 2017, 6, 351–359.
- Walia, G.S.; Singh, T.; Singh, K.; Verma, N. Robust Multimodal Biometric System based on Optimal Score Level Fusion Model. Expert Syst. Appl. 2019, 116, 364–376.
- Mansour, A.; Sadik, M.; Sabir, E.; Jebbar, M. AMBAS: An autonomous multimodal biometric authentication system. Int. J. Auton. Adapt. Commun. Syst. 2019, 12, 187–217.
- Sharma, D.; Kumar, A. An Empirical Analysis Over the Four Different Feature-Based Face and Iris Biometric Recognition Techniques. Int. J. Adv. Comput. Sci. Appl. 2012, 3, 13.
- Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. From BoW to CNN: Two Decades of Texture Representation for Texture Classification. Int. J. Comput. Vis. 2019, 127, 74–109.
- Son, B.; Lee, Y. Biometric authentication system using reduced joint feature vector of iris and face. In Audio- and Video-Based Biometric Person Authentication, Lecture Notes in Computer Science; Kanade, T., Jain, A., Ratha, N.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3546, pp. 513–522.
- Zhang, Z.; Wang, R.; Pan, K.; Li, S.Z.; Zhang, P. Fusion of Near Infrared Face and Iris Biometrics. In Advances in Biometrics, Lecture Notes in Computer Science; Lee, S.W., Li, S.Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4642, pp. 172–180.
- Morizet, N.; Gilles, J. A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments. In Proceedings of the 4th International Symposium on Advances in Visual Computing, Las Vegas, NV, USA, 1–3 December 2008; pp. 661–671.
- Rattani, A.; Tistarelli, M. Robust multi-modal and multi-unit feature level fusion of face and iris biometrics. In Advances in Biometrics, Lecture Notes in Computer Science; Tistarelli, M., Nixon, M.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5558, pp. 960–969.
- Wang, Z.; Wang, E.; Wang, S.H.; Ding, Q. Multimodal Biometric System Using Face-Iris Fusion Feature. J. Comput. 2011, 6, 931–938.
- Roy, K.; O'Connor, B.; Ahmad, F. Multibiometric System Using Level Set, Modified LBP and Random Forest. Int. J. Image Graph. 2014, 14, 1–19.
- Eskandari, M.; Toygar, O. Fusion of face and iris biometrics using local and global feature extraction methods. Signal Image Video Process. 2014, 8, 995–1006.
- Huo, G.; Liu, Y.; Zhu, X.; Dong, H.; He, F. Face–iris multimodal biometric scheme based on feature level fusion. J. Electron. Imaging 2015, 24.
- Moutafis, P.; Kakadiaris, I.A. Rank-Based Score Normalization for Multi-Biometric Score Fusion. In Proceedings of the IEEE International Symposium on Technologies for Homeland Security, Waltham, MA, USA, 5–6 November 2015.
- Eskandari, M.; Toygar, Ö. Selection of optimized features and weights on face-iris fusion using distance images. Comput. Vis. Image Underst. 2015, 137, 63–75.
- Bouzouina, Y.; Hamami, L. Multimodal Biometric: Iris and face Recognition based on feature selection of Iris with GA and scores level fusion with SVM. In Proceedings of the International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 30 August–1 September 2017.
- Yang, J.; Zhang, D.; Yang, J.-Y.; Niu, B. Globally maximizing, locally minimizing: Unsupervised discriminant projection with applications to face and palm biometrics. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 650–664.
- Ammour, B.; Bouden, T.; Amira-Biad, S. Multimodal biometric identification system based on the face and iris. In Proceedings of the International Conference on Electrical Engineering, Boumerdes, Algeria, 29–31 October 2017.
- Du, Y. Using 2D Log-Gabor Spatial Filters for Iris Recognition. In Proceedings of Biometric Technology for Human Identification, FL, USA, 17–21 April 2006.
- Bounneche, M.D.; Boubchir, L.; Bouridane, A. Multi-spectral palmprint recognition based on oriented multiscale log-Gabor filters. Neurocomputing 2016, 205, 274–286.
- Cai, D.; He, X.; Han, J. Speed up kernel discriminant analysis. Int. J. Very Large Data Bases 2011, 20, 21–33.
- Kume, K.; Nose-Togawa, N. Filter Characteristics in Image Decomposition with Singular Spectrum Analysis. Adv. Data Sci. Adapt. Anal. 2016, 8, 1650002.
- Zabalza, J.; Ren, J.; Marshall, S. Singular Spectrum Analysis for effective noise removal and improved data classification in Hyperspectral Imaging. In Proceedings of the IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, Switzerland, 24–27 June 2014.
- Golyandina, N.; Korobeynikov, A.; Zhigljavsky, A. Singular Spectrum Analysis with R (Use R!), 1st ed.; Springer: Berlin/Heidelberg, Germany, 2018; ISBN-10 3662573784, ISBN-13 978-3662573785.
- Leles, M.C.R.; Sansão, J.P.H.; Mozelli, L.A.; Guimarães, H.N. Improving reconstruction of time-series based in Singular Spectrum Analysis: A segmentation approach. Digit. Signal Process. 2018, 77, 63–76.
- Hassani, H. Singular Spectrum Analysis: Methodology and Comparison. J. Data Sci. 2007, 5, 239–257.
- Rashik Hassan, A.; Hassan Bhuiyan, M.I. An automated method for sleep staging from EEG signals using normal inverse Gaussian parameters and adaptive boosting. Neurocomputing 2017, 5, 76–87.
- Shang, W.; Huang, H.; Zhu, H.; Lin, Y.; Wang, Z.; Qu, Y. An Improved kNN Algorithm-Fuzzy kNN. In Computational Intelligence and Security, CIS 2005; Hao, Y., Liu, J., Wang, Y., Cheung, Y., Yin, H., Jiao, L., Ma, J., Jiao, Y.-C., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3801.
- CASIA-IrisV3 Database. Available online: http://www.cbsr.ia.ac.cn/IrisDatabase.htm (accessed on 15 December 2019).
- ORL Face Database. Available online: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html (accessed on 15 December 2019).
- FERET Database. Available online: http://www.nist.gov/feret/gnd/feret_gnd.dtd (accessed on 15 December 2019).
Authors | Feature Extraction | Fusion Process and Normalization | Matching |
---|---|---|---|
B. Son and Y. Lee [23] (2005) | A multi-level 2D Daubechies wavelet transform is used for feature extraction. For dimensionality reduction, the authors use direct linear discriminant analysis (DLDA). | -Feature-level fusion, -Features are concatenated. | Euclidean distance. |
Z. Zhang et al. [24] (2007) | A near-infrared (NIR) face–iris image database is used. Face recognition is based on eigenfaces, while iris recognition is based on Daugman's algorithm. | -Score-level fusion, -Min-Max normalization, -Sum rule and product rule. | Hamming distance. |
N. Morizet and J. Gilles [25] (2008) | Facial features are extracted with Log-Gabor principal component analysis (LGPCA), while iris features are extracted with 3-level wavelet packets. | -Score-level fusion, -Matchers are modeled as a Gaussian distribution. | Cosine similarity. |
A. Rattani and M. Tistarelli [26] (2009) | The scale-invariant feature transform (SIFT) and a spatial sampling method are used for the selection process. | -Feature-level fusion, -Features are concatenated. | Euclidean distance. |
Z. Wang et al. [27] (2011) | Facial features are extracted with eigenfaces, while iris features are based on Daugman's algorithm. | -Feature-level fusion. | Euclidean distance. |
K. Roy et al. [28] (2014) | Histogram of the modified local binary pattern (MLBP). An optimal subset of features is selected with random forest (RF). | -Feature-level fusion, -Features are concatenated. | Manhattan distance. |
M. Eskandari and O. Toygar [29] (2014) | Face features are extracted with LBP as a local extractor and iris features with subspace LDA as a global extractor. | -Score-level fusion, -Score normalization with Tanh, -Fusion with weighted sum rule. | Euclidean distance and Hamming distance. |
G. Huo et al. [30] (2015) | 2D Gabor filters with different scales and orientations; the responses are transformed by histogram statistics into an energy-orientation feature. PCA is used for dimensionality reduction. | -Feature-level fusion, -Features are concatenated. | Support vector machine (SVM). |
H. M. Sim et al. [5] (2014) | Facial features are extracted with eigenfaces, while iris features are extracted with the NeuWave Network method. | -Weighted score-level fusion. | Euclidean distance and Hamming distance. |
P. Moutafis et al. [31] (2015) | Facial features are extracted with eigenfaces, while iris features are based on Daugman's algorithm. | -Score-level fusion, -Rank-based score normalization (RBSN) framework. | Pairwise distances. |
M. Eskandari and O. Toygar [32] (2015) | Iris features are extracted with a 1D Log-Gabor filter. For the face, five kinds of local and global features are extracted using subspace PCA, modular PCA and LBP. Particle swarm optimization (PSO) is employed in the selection process. | -Score-level and feature-level fusion, -Normalization with Tanh. | Weighted sum rule. |
Y. Bouzouina et al. [33] (2017) | Facial features are extracted with PCA and the discrete cosine transform (DCT), while iris features are extracted with a 1D Log-Gabor filter and Zernike moments. A genetic algorithm (GA) is used for dimensionality reduction. | -Score-level fusion, -Tanh normalization. | Support vector machine (SVM). |
B. Ammour et al. [2] (2018) | A multi-resolution two-dimensional Log-Gabor filter combined with spectral regression kernel discriminant analysis. | -Hybrid-level fusion. | Euclidean distance. |
Scales (s) | Orientations (o) | σf/f₀ | 1 Image | 2 Images | 3 Images | 4 Images |
---|---|---|---|---|---|---|
4 | 5 | 0.65 | 89.16% | 95.00% | 93.33% | 95.25% |
4 | 5 | 0.85 | 90.75% | 93.00% | 94.50% | 96.00% |
5 | 8 | 0.65 | 92.00% | 95.33% | 95.00% | 96.50% |
5 | 8 | 0.85 | 93.50% | 96.00% | 96.75% | 97.33% |
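For illustration, a generic frequency-domain 2D Log-Gabor filter bank parameterized by the number of scales (s), the number of orientations (o) and the bandwidth ratio σf/f₀ varied in the table above can be sketched as follows. The construction and all default values are assumptions based on the standard Log-Gabor formulation, not the authors' implementation.

```python
import numpy as np

def log_gabor_bank(rows, cols, n_scales=4, n_orients=5,
                   min_wavelength=6, mult=2.0, sigma_f_ratio=0.65,
                   sigma_theta=0.55):
    """Build a bank of 2D Log-Gabor filters in the frequency domain."""
    y, x = np.mgrid[-rows // 2:rows - rows // 2, -cols // 2:cols - cols // 2]
    radius = np.sqrt((x / cols) ** 2 + (y / rows) ** 2)  # normalized frequency
    radius[rows // 2, cols // 2] = 1.0                   # avoid log(0) at DC
    theta = np.arctan2(-y, x)
    filters = []
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)          # center frequency
        radial = np.exp(-(np.log(radius / f0) ** 2) /
                        (2 * np.log(sigma_f_ratio) ** 2))
        radial[rows // 2, cols // 2] = 0.0               # zero DC response
        for o in range(n_orients):
            theta0 = o * np.pi / n_orients               # center orientation
            # Wrapped angular distance to the center orientation.
            d_theta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
            angular = np.exp(-(d_theta ** 2) / (2 * sigma_theta ** 2))
            filters.append(radial * angular)
    return filters

def apply_filter(img, filt):
    """Filter an image in the frequency domain; return the complex response."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.fft.ifft2(np.fft.ifftshift(spectrum * filt))
```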
 | M = 5 | | | M = 9 | | | M = 12 | | |
---|---|---|---|---|---|---|---|---|---|
 | 1 Image | 2 Images | 3 Images | 1 Image | 2 Images | 3 Images | 1 Image | 2 Images | 3 Images |
PC1 | 85.62% | 86.87% | 97.00% | 76.33% | 86.87% | 94.00% | 77.77% | 80.71% | 89.00% |
PC2 | 66.87% | 71.87% | 78.00% | 66.22% | 68.42% | 75.00% | 66.25% | 67.85% | 78.00% |
PC3 | 65.62% | 74.37% | 82.00% | 51.22% | 59.28% | 71.00% | 59.28% | 60.62% | 71.00% |
PC1+PC2 | 84.37% | 87.85% | 92.00% | 86.87% | 84.28% | 93.00% | 75.00% | 83.57% | 93.00% |
PC1+PC2+PC3 | 85.62% | 83.57% | 90.00% | 83.75% | 82.85% | 90.00% | 73.12% | 87.14% | 91.00% |
 | M = 5 | | M = 9 | | M = 12 | |
---|---|---|---|---|---|---|
 | 1 Image | 2 Images | 1 Image | 2 Images | 1 Image | 2 Images |
PC1 | 90.87% | 95.00% | 88.75% | 92.00% | 91.33% | 94.00% |
PC2 | 84.33% | 87.33% | 75.33% | 90.50% | 83.75% | 85.25% |
PC3 | 82.50% | 85.50% | 70.00% | 87.33% | 79.50% | 80.33% |
PC1+PC2 | 87.00% | 93.75% | 89.50% | 90.75% | 89.33% | 94.75% |
PC1+PC2+PC3 | 85.25% | 91.00% | 88.00% | 90.00% | 86.50% | 93.00% |
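The window length M and the reconstructed components PC1, PC2 and PC3 reported in the two tables above are consistent with a singular spectrum analysis (SSA) style decomposition of the face data. As a hedged illustration only, the sketch below performs generic 1D SSA (embedding, SVD, diagonal averaging); the 2D variant used for images is not reproduced here.

```python
import numpy as np

def ssa_components(x, M, n_components=3):
    """Generic 1D singular spectrum analysis (SSA): embed the series with
    window length M, take the SVD of the trajectory matrix, and reconstruct
    the first n_components series by diagonal averaging."""
    x = np.asarray(x, dtype=float)
    N = x.size
    K = N - M + 1
    # Trajectory (Hankel) matrix: column k holds x[k : k + M].
    X = np.column_stack([x[k:k + M] for k in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for i in range(min(n_components, s.size)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])  # rank-one elementary matrix
        # Diagonal averaging (Hankelization) back to a length-N series.
        comp = np.array([np.mean(Xi[::-1, :].diagonal(k))
                         for k in range(-(M - 1), K)])
        components.append(comp)
    return components  # reconstructed series in the style of PC1, PC2, PC3

# Example: decompose a noisy signal with window length M = 5.
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
pc1, pc2, pc3 = ssa_components(signal, M=5)
```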
Fusion Rule | Min-Max | Z-Score |
---|---|---|
Min rule | 68.33% | 56.83% |
Max rule | 99.16% | 97.50% |
Sum rule | 98.33% | 80.25% |
Weighted sum rule | 97.50% | 76.66% |
Fusion Rule | Min-Max | Z-Score |
---|---|---|
Min rule | 86.66% | 95.50% |
Max rule | 99.33% | 99.00% |
Sum rule | 98.50% | 98.00% |
Weighted sum rule | 96.00% | 97.16% |