Hyperspectral Face Recognition with Adaptive and Parallel SVMs in Partially Hidden Face Scenarios
Abstract
1. Introduction
- An algorithm that uses computer vision techniques to extract the facial regions of interest from hyperspectral images for face recognition.
- A significant compression of the spatial information obtained from the facial regions of interest that preserves the uniqueness of the hyperspectral face signature.
- An adaptive and parallel Support Vector Machine (AP-SVM) tree that identifies unknown individuals using only the visible regions of interest.
- An evaluation of the proposed model that analyzes the recognition accuracy, together with an analysis of the similarity results.
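The contributions above describe a three-stage flow: extract the visible facial regions of interest, compress each region's spatial information into a compact spectral signature, and classify with the AP-SVM tree. The following is only an illustrative sketch of that flow; the function names, the ROI representation, and the mean-spectrum compression used here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def recognition_pipeline(cube, visible_rois, classifier):
    """Illustrative three-stage flow: ROI extraction -> spectral
    compression -> classification.

    cube: hyperspectral image of shape (height, width, n_bands).
    visible_rois: list of (row_slice, col_slice) pairs marking the
    visible facial regions of interest (hypothetical representation).
    classifier: a trained object exposing .predict, standing in for
    the AP-SVM tree of Section 2.2.
    """
    n_bands = cube.shape[2]
    signatures = []
    for rows, cols in visible_rois:
        region = cube[rows, cols, :]  # spatial ROI, all bands kept
        # Compress spatial information: one mean spectrum per ROI,
        # discarding spatial detail while keeping the spectral signature.
        signatures.append(region.reshape(-1, n_bands).mean(axis=0))
    features = np.concatenate(signatures)  # compact feature vector
    return classifier.predict(features.reshape(1, -1))
```

A usage sketch would pass only the ROIs that remain visible (e.g., forehead and eyes when a mask covers the lower face), so the feature vector shrinks gracefully with occlusion.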
2. Materials and Methods
2.1. Extracting Spectral Information
2.1.1. Algorithm Notations
- Number of bands (nb). This parameter denotes the number of bands that the hyperspectral image contains. It is provided to the algorithm so that the whole spectral information is considered.
- Degrees threshold. This parameter determines the threshold, in degrees, up to which the hyperspectral image must be rotated, i.e., all bands are rotated until they all fulfil this requirement.
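The interplay of the two parameters can be pictured as a loop that rotates all nb bands together until the face tilt falls below the degrees threshold. This is only a schematic sketch: `estimate_tilt` and `rotate_band` are hypothetical helpers (e.g., a tilt estimate from the inter-eye line, and a per-band image rotation), not functions from the article.

```python
import numpy as np

def align_bands(cube, estimate_tilt, rotate_band, deg_threshold, max_iters=10):
    """Iteratively rotate every band of the hypercube until the
    estimated face tilt drops below the degrees threshold.

    estimate_tilt(cube) -> tilt of the face in degrees (hypothetical).
    rotate_band(band, angle) -> rotated 2-D band (hypothetical).
    """
    nb = cube.shape[2]  # number of bands (nb)
    for _ in range(max_iters):
        angle = estimate_tilt(cube)
        if abs(angle) <= deg_threshold:
            break  # all bands now fulfil the requirement
        # Apply the same corrective rotation to all nb bands so each
        # pixel keeps a consistent spectral signature across bands.
        cube = np.stack([rotate_band(cube[:, :, b], -angle)
                         for b in range(nb)], axis=2)
    return cube
```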
2.1.2. HyperFEA Algorithm
Algorithm 1. HyperFEA algorithm.
2.1.3. Face Alignment and Extracting Facial Landmarks
2.1.4. Extracting Facial Regions of Interest
2.1.5. Spectral Transform
2.2. A Tree Based on Adaptive and Parallel SVMs for Face Recognition
2.2.1. Layer 1: Centroid Classification
2.2.2. Layer 2: Flattened Centroid Classification
2.2.3. Layer 3: Brightest Classification
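The three layers above can be pictured as a cascade in which each classifier is consulted on a different view of the ROI features (centroid, flattened centroid, brightest). The sketch below is only schematic: the margin-based fallback rule and the `decision` interface are assumptions for illustration, not the authors' exact tree design.

```python
def ap_svm_tree(feature_sets, classifiers, min_margin=0.0):
    """Schematic cascade over the three layers of the AP-SVM tree.

    feature_sets: dict mapping layer name -> feature vector, e.g.
    {"centroid": ..., "flattened_centroid": ..., "brightest": ...}.
    classifiers: dict mapping layer name -> trained SVM-like object
    exposing decision(x) -> (label, margin).

    Layers are consulted in order; the first prediction whose margin
    exceeds min_margin is accepted, otherwise the last layer decides.
    """
    layers = ["centroid", "flattened_centroid", "brightest"]
    label = None
    for layer in layers:
        label, margin = classifiers[layer].decision(feature_sets[layer])
        if margin > min_margin:
            return label, layer  # confident enough; stop here
    return label, layers[-1]    # fall through to the last layer
```

Since each layer's classifier works on an independent feature view, the three evaluations can also run in parallel, which is the sense in which such a tree is "adaptive and parallel".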
3. Experimental Results
3.1. Partial Results of the AP-SVM Tree
3.2. Facial Recognition Accuracy
3.3. Performance Evaluation Metrics for the AP-SVM Tree
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Klare, B.F.; Burge, M.J.; Klontz, J.C.; Vorder Bruegge, R.W.; Jain, A.K. Face Recognition Performance: Role of Demographic Information. IEEE Trans. Inf. Forensics Secur. 2012, 7, 1789–1801.
- Zhou, Y.; Ni, H.; Ren, F.; Kang, X. Face and Gender Recognition System Based on Convolutional Neural Networks. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 1091–1095.
- Liu, Y.J.; Zhang, J.K.; Yan, W.J.; Wang, S.J.; Zhao, G.; Fu, X. A Main Directional Mean Optical Flow Feature for Spontaneous Micro-Expression Recognition. IEEE Trans. Affect. Comput. 2016, 7, 299–310.
- Arellano, P.; Tansey, K.; Balzter, H.; Boyd, D.S. Detecting the effects of hydrocarbon pollution in the Amazon forest using hyperspectral satellite images. Environ. Pollut. 2015, 205, 225–239.
- Calin, M.A.; Calin, A.C.; Nicolae, D.N. Application of airborne and spaceborne hyperspectral imaging techniques for atmospheric research: Past, present, and future. Appl. Spectrosc. Rev. 2020, 56, 1–35.
- Qureshi, R.; Uzair, M.; Zahra, A. Current Advances in Hyperspectral Face Recognition. arXiv 2020, arXiv:12136425.v1.
- Alzu’bi, A.; Albalas, F.; AL-Hadhrami, T.; Younis, L.B.; Bashayreh, A. Masked Face Recognition Using Deep Learning: A Review. Electronics 2021, 10, 2666.
- Jignesh Chowdary, G.; Punn, N.S.; Sonbhadra, S.K.; Agarwal, S. Face Mask Detection Using Transfer Learning of InceptionV3. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2020; pp. 81–90.
- Loey, M.; Manogaran, G.; Taha, M.H.N.; Khalifa, N.E.M. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the COVID-19 pandemic. Measurement 2021, 167, 108288.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 24–30 June 2016; pp. 770–778.
- Singh, S.; Ahuja, U.; Kumar, M.; Kumar, K.; Sachdeva, M. Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment. Multimed. Tools Appl. 2021, 80, 19753–19768.
- Vinh, T.Q.; Anh, N.T.N. Real-Time Face Mask Detector Using YOLOv3 Algorithm and Haar Cascade Classifier. In Proceedings of the 2020 International Conference on Advanced Computing and Applications (ACOMP), Quy Nhon, Vietnam, 25–27 November 2020; pp. 146–149.
- Wu, P.; Li, H.; Zeng, N.; Li, F. FMD-Yolo: An efficient face mask detection method for COVID-19 prevention and control in public. Image Vis. Comput. 2022, 117, 104341.
- Su, X.; Gao, M.; Ren, J.; Li, Y.; Dong, M.; Liu, X. Face mask detection and classification via deep transfer learning. Multimed. Tools Appl. 2021, 81, 4475–4494.
- Ud Din, N.; Javed, K.; Bae, S.; Yi, J. A Novel GAN-Based Network for Unmasking of Masked Face. IEEE Access 2020, 8, 44276–44287.
- Drira, H.; Ben Amor, B.; Srivastava, A.; Daoudi, M.; Slama, R. 3D Face Recognition under Expressions, Occlusions, and Pose Variations. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2270–2283.
- Gawali, S.; Deshmukh, R. 3D Face Recognition Using Geodesic Facial Curves to Handle Expression, Occlusion and Pose Variations. Int. J. Comput. Sci. IT 2014, 5, 4284–4287.
- Hariri, W. Efficient masked face recognition method during the COVID-19 pandemic. Signal Image Video Process. 2021, 5, 605–612.
- Boutros, F.; Damer, N.; Kirchbuchner, F.; Kuijper, A. Unmasking Face Embeddings by Self-restrained Triplet Loss for Accurate Masked Face Recognition. arXiv 2021, arXiv:2103.01716.
- Chen, S.; Liu, Y.; Gao, X.; Han, Z. MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices; Springer: Berlin/Heidelberg, Germany, 2018; pp. 1–10.
- Anwar, A.; Raychowdhury, A. Masked Face Recognition for Secure Authentication. arXiv 2020, arXiv:2008.11104.
- Cao, Q.; Shen, L.; Xie, W.; Parkhi, O.M.; Zisserman, A. VGGFace2: A Dataset for Recognising Faces across Pose and Age. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Los Alamitos, CA, USA, 15–19 May 2018; pp. 67–74.
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823.
- Uzair, M.; Mahmood, A.; Mian, A. Hyperspectral Face Recognition With Spatiospectral Information Fusion and PLS Regression. IEEE Trans. Image Process. 2015, 24, 1127–1137.
- Bhattacharya, S.; Das, S.; Routray, A. Graph Manifold Clustering based Band Selection for Hyperspectral Face Recognition. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1990–1994.
- Chen, Q.; Sun, J.; Palade, V.; Shi, X.; Liu, L. Hierarchical Clustering Based Band Selection Algorithm for Hyperspectral Face Recognition. IEEE Access 2019, 7, 24333–24342.
- Sharma, V.; Diba, A.; Tuytelaars, T.; Gool, L.V. Hyperspectral CNN for Image Classification & Band Selection, with Application to Face Recognition; Technical Report; KU Leuven: Leuven, Belgium, 2016.
- Pan, Z.; Healey, G.; Prasad, M.; Tromberg, B. Face recognition in hyperspectral images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1552–1560.
- Di, W.; Zhang, L.; Zhang, D.; Pan, Q. Studies on Hyperspectral Face Recognition in Visible Spectrum with Feature Band Selection. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. 2010, 40, 1354–1361.
- Becattini, F.; Song, X.; Baecchi, C.; Fang, S.T.; Ferrari, C.; Nie, L.; Del Bimbo, A. PLM-IPE: A Pixel-Landmark Mutual Enhanced Framework for Implicit Preference Estimation. In ACM Multimedia Asia; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–5.
- Graf, H.; Cosatto, E.; Bottou, L.; Dourdanovic, I.; Vapnik, V. Parallel Support Vector Machines: The Cascade SVM. In Advances in Neural Information Processing Systems; Saul, L., Weiss, Y., Bottou, L., Eds.; MIT Press: Cambridge, MA, USA, 2005; Volume 17, pp. 1–8.
- Chen, G.; Li, C.; Sun, W. Hyperspectral face recognition via feature extraction and CRC-based classifier. IET Image Process. 2017, 11, 266–272.
- Tsai, A.C.; Ou, Y.Y.; Wu, W.C.; Wang, J.F. Integrated Single Shot Multi-Box Detector and Efficient Pre-Trained Deep Convolutional Neural Network for Partially Occluded Face Recognition System. IEEE Access 2021, 9, 164148–164158.
- Almabdy, S.; Elrefaei, L. Deep Convolutional Neural Network-Based Approaches for Face Recognition. Appl. Sci. 2019, 9, 4397.
- Yuan, L.U.O.; Wu, C.M.; Zhang, Y. Facial expression feature extraction using hybrid PCA and LBP. J. China Univ. Posts Telecommun. 2013, 20, 120–124.
- Zhu, M.H.; Li, S.T.; Ye, H. An Occluded Facial Expression Recognition Method Based on Sparse Representation. Pattern Recognit. Artif. Intell. 2014, 27, 708–712.
- Yeh, R.A.; Chen, C.; Lim, T.Y.; Hasegawa-Johnson, M.; Do, M.N. Semantic Image Inpainting with Perceptual and Contextual Losses. arXiv 2016, arXiv:1607.07539.
| Top | All Face (100% ROI Visible) | Upper Part (50% ROI Visible) | Forehead (25% ROI Visible) |
|---|---|---|---|
| Top-5 Euclidean | 93% | 93% | 66% |
| Top-5 Cosine | 80% | 80% | 60% |
| Top-3 Euclidean | 80% | 73% | 60% |
| Top-3 Cosine | 80% | 73% | 60% |
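Top-N figures of this kind are typically obtained by ranking the gallery signatures by their distance to the query signature under the chosen metric. A minimal sketch (the gallery representation as label/vector pairs is hypothetical):

```python
import numpy as np

def top_n_match(query, gallery, n=5, metric="euclidean"):
    """Return the labels of the n gallery signatures closest to the
    query, under Euclidean distance or cosine distance.

    gallery: list of (label, signature_vector) pairs.
    """
    labels, vectors = zip(*gallery)
    V = np.asarray(vectors, dtype=float)
    q = np.asarray(query, dtype=float)
    if metric == "euclidean":
        d = np.linalg.norm(V - q, axis=1)
    else:
        # Cosine distance = 1 - cosine similarity.
        d = 1.0 - (V @ q) / (np.linalg.norm(V, axis=1) * np.linalg.norm(q))
    order = np.argsort(d)[:n]
    return [labels[i] for i in order]
```

A query counts as a Top-N hit when the correct identity appears anywhere in the returned list, which is how the 93%/66% style figures above would be aggregated over a test set.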
| Ref. | Dataset/Size | Bands | Spectrum | Extracted Features | Extracted Bands | Accuracy | Compression Ratio |
|---|---|---|---|---|---|---|---|
| [28] | 200 | 31 | 0.7–1.0 μm | 5 | 31 | 75% | 99.9995% |
| [29] | 25 (PolyU) | 33 | 0.4–0.72 μm | 2700 (54 × 50) | 24 | 78% | 99.2509% |
| [27] | CMU | 65 | 0.4–0.72 μm | 69,169 (263 × 263) | 65 | 86.1% | 93.2264% |
| [24] | UWA | 33 | 0.4–0.72 μm | 900 (30 × 30) | 4 | 98% | 99.9895% |
| | PolyU | 24 | 0.45–0.68 μm | 1748 (46 × 38) | 5 | 95.2% | 99.8610% |
| [32] | PolyU | 33 | 0.4–0.72 μm | 4096 (64 × 64) | 4 | 95% | 99.8106% |
| | CMU | 65 | 0.4–0.72 μm | 4096 (64 × 64) | 37 | 98% | 99.7776% |
| Ours | UWA | 33 | 0.4–0.72 μm | 70 | 33 | 93% | 99.9933% |
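The compression-ratio column expresses the fraction of the original hypercube values that is discarded after feature extraction. A small illustrative helper (the cube dimensions in the example are hypothetical, not taken from the datasets above):

```python
def compression_ratio(original_values, retained_values):
    """Percentage of the original hypercube values discarded
    when only `retained_values` features are kept."""
    return 100.0 * (1.0 - retained_values / original_values)

# e.g., a hypothetical 200 x 160 pixel face cube with 33 bands,
# reduced to a 70-value feature vector:
ratio = compression_ratio(200 * 160 * 33, 70)
```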
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Caba, J.; Barba, J.; Rincón, F.; de la Torre, J.A.; Escolar, S.; López, J.C. Hyperspectral Face Recognition with Adaptive and Parallel SVMs in Partially Hidden Face Scenarios. Sensors 2022, 22, 7641. https://doi.org/10.3390/s22197641