Article

Cosmetic Detection Framework for Face and Iris Biometrics

Department of Computer and Software Engineering, Toros University, Mersin 33140, Turkey
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(4), 122; https://doi.org/10.3390/sym10040122
Submission received: 26 March 2018 / Revised: 15 April 2018 / Accepted: 16 April 2018 / Published: 19 April 2018

Abstract

Cosmetics pose challenges to the recognition performance of face and iris biometric systems because of their ability to alter natural facial and iris patterns. In this study, facial makeup and iris contact lenses are considered as the most commonly applied cosmetics for the face and iris. The present work presents a novel solution for detecting cosmetics in both face and iris biometrics through the fusion of texture, shape and color descriptors of images. The proposed cosmetic detection scheme combines the microtexton information from the local primitives of texture descriptors with the color features obtained from overlapped blocks in order to better detect spots, flat areas, edges, edge ends, curves, appearance and colors. The proposed scheme was applied to the YouTube Makeup (YMU) facial makeup database and the IIIT-Delhi Contact Lens iris database. The results demonstrate that the proposed cosmetic detection scheme significantly outperforms the other schemes implemented in this study.

1. Introduction

Face and iris modalities have been regarded as two promising biometric traits over the past decade [1,2,3,4,5,6,7,8,9,10,11]. However, the presence of cosmetics poses challenges related to the performance degradation of face and iris biometrics [12,13]. This study considers the effect of facial makeup and iris contact lenses on the recognition performance of biometric systems. Generally, facial makeup and iris contact lenses are two popular types of cosmetics that are publicly acceptable in several parts of the world. Makeup affects the color, shape, texture and form of face images. There are three main categories of makeup that can be applied to faces: light, medium and heavy makeup [14]. Iris contact lenses, on the other hand, are divided into two categories: transparent (soft) lenses and color cosmetic (texture) lenses. The use of contact lenses, especially textured lenses, alters the texture, appearance and color of iris patterns [15]. Therefore, designing an efficient method to detect facial makeup and contact lenses in face and iris images would benefit face–iris biometric recognition systems in terms of both security and recognition performance.
The impact of makeup and contact lenses on face and iris images has been discussed in several studies. The authors of a previous study [12] demonstrated the reduced performance of face recognition schemes in the presence of makeup. An automatic facial makeup detection method was presented in another study [16], which captured the local and global information of facial images. In reference [17], the authors applied Canonical Correlation Analysis (CCA) to learn meta subspaces that maximize the correlation of feature vectors belonging to the same individual. The authors of a previous study [18] proposed that the correlation between several facial parts can be learnt using Partial Least Squares (PLS) to improve the verification performance in the presence of cosmetics. For iris recognition with contact lenses, on the other hand, a gray-level co-occurrence matrix was proposed for training a support vector machine, which ultimately improved the classification rate of the cosmetic detection method [19]. The impact of contact lenses on iris recognition performance was analyzed in a previous study [15] using two different datasets. In reference [20], a multimodal biometric system using both irises was applied to investigate the effect of soft and texture contact lenses on the recognition performance of both unimodal and multimodal systems. In reference [21], three different techniques based on iris-textons and the co-occurrence matrix were proposed for detecting texture contact lenses, measuring the iris edge sharpness and characterizing the texture of irises. The authors of a previous study [22] detected texture lenses using Gaussian-smoothed and Scale-Invariant Feature Transform (SIFT)-weighted Local Binary Patterns.
In this study, an efficient scheme is proposed to detect makeup and texture contact lenses in face and iris images by utilizing both the global and local information of the modalities. The proposed scheme fuses color-, shape- and texture-based features extracted from the face and/or iris with cosmetics, before a Support Vector Machine (SVM) [23] is applied to detect face–iris cosmetics in the input images. We propose the extraction of the texture and shape characteristics of facial and iris modalities using a multi-scale local–global technique to efficiently collect the microtexton information of local primitives along with the global features of images with makeup and texture contact lenses. Accordingly, the Log-Gabor transform (L-Gabor) shape descriptor is applied in this study to produce a set of Gabor filters and, consequently, the microtexton information of the global and local primitives is extracted using an Overlapped Local Binary Pattern (Ov-LBP). Additionally, in order to collect the color-based information of images with cosmetics, the present work computes the overlapped color moments of the face and iris images. The current work is therefore the first common scheme applied to both the face and iris traits with makeup and texture lenses, fusing the advantages of color, shape and texture patterns to efficiently detect spots, flat areas, edges, edge ends, curves and colors. The main contribution of this work lies in extracting and preserving the detailed pattern information of both modalities and utilizing it to decide on the presence or absence of cosmetics for face and iris biometrics. In other words, the original contribution of this work can be summarized as the proposal of a multi-modal cosmetic detection system for both face and iris biometrics according to their shape, color and texture. The proposed detection scheme can therefore be applied in any unimodal and/or multimodal face–iris recognition system to improve its security and recognition performance. The proposed cosmetic detection scheme is evaluated on the YMU [12,24] facial makeup database and the IIIT-Delhi Contact Lens [13] iris database, and is compared with existing facial makeup and texture lens detection methods using the Classification Rate (CR) and Total Error Rate (TER).
The rest of this paper is organized as follows. Section 2 describes the facial makeup and iris contact lens approaches applied in this study. The proposed face–iris cosmetic detection scheme is explained in Section 3, while Section 4 concentrates on the cosmetic databases and experimental results. Finally, Section 5 draws some conclusions.

2. Cosmetic Detection for Face–Iris Biometrics

Although the main objective of using contact lenses is to correct individual eyesight as an alternative to spectacles/glasses, they can also be used for cosmetic purposes. In general, the use of contact lenses, especially textured lenses, alters the texture and color of iris images and confounds the natural iris patterns [15]. In addition, facial makeup, while relevant to the aesthetics of an individual face, affects the texture, color and shape of face images [16], resulting in reduced performance of face recognition systems. Therefore, a robust detection scheme is needed for both face and iris biometrics in order to increase the reliability and security of face–iris recognition systems. In this study, we design a common detection scheme for face–iris biometrics with cosmetics based on their color, shape and texture information. In fact, the color, shape and texture characteristics of both face and iris traits can be affected if an individual uses makeup or contact lenses. Therefore, the aim is to utilize the information obtained from these factors and combine them in a single scheme to detect makeup and contact lenses for both the face and iris.
The Local Binary Pattern (LBP) [25] feature extraction method is considered a powerful micro-pattern descriptor for analyzing the texture of facial and/or iris images. In order to detect the presence of makeup or contact lenses, LBP and numerous variants of LBP have been applied in several studies [15,16,22,24] as successful texture-based approaches. In this work, we use a multi-scale Overlapped Local Binary Pattern (multiS-Ov-LBP) technique to collect microtexton information of local primitives from facial makeup and iris contact lens images. Recently, it was shown that extracting detailed texture information from irises using uniform LBP patterns provides more representative histograms than analyzing the texture patterns as a whole [26]. Uniformity also plays a major role in characterizing the micro-features of facial makeup [16]. Moreover, different combinations of LBP operators have been used to extract more micro-texture details for discriminating real and fake face images [27]. Therefore, this study discriminates makeup and non-makeup images using the multi-scale LBP operators $LBP_{8,1}^{u2}$, $LBP_{8,2}^{u2}$ and $LBP_{16,2}^{u2}$. Each operator first extracts the histogram of the whole image globally, and the concatenation of these histograms provides a feature set of 361 (59 + 59 + 243) bins. Extracting detailed micro-texture information of local primitives from overlapped blocks leads to better recognition and detection of spots, flat areas, edges, edge ends, curves, appearance and colors [16,26], which are important factors for cosmetic applications of face and iris biometrics. Therefore, we apply the multi-scale LBP operators to overlapped blocks to extract more detailed local primitives for cosmetic detection. In order to obtain the local histograms of each operator, the images are divided into 3 × 3 overlapping regions with an overlap of 15 pixels. The concatenation of the three operators leads to a 3249-bin (9 × 59 + 9 × 59 + 9 × 243) histogram for one image. Due to the high dimensionality of the features produced by this method, we apply principal component analysis (PCA) and linear discriminant analysis (LDA) to reduce the dimensionality of the feature sets. The proposed scheme combines the global and local features extracted using the LBP texture descriptor to improve the robustness of the scheme for cosmetic detection.
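As an illustration of this texture step, the following is a minimal Python sketch of multi-scale uniform LBP extraction with global and overlapped local histograms (the paper's experiments were implemented in MATLAB; scikit-image's "nri_uniform" mapping is used here because it yields the 59- and 243-bin histograms mentioned above). The exact block geometry, i.e., how the 15-pixel overlap is distributed across the 3 × 3 grid, is not fully specified in the text, so the windowing below is an assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# (P, R, bins) for the three operators LBP_{8,1}^{u2}, LBP_{8,2}^{u2}, LBP_{16,2}^{u2}.
# skimage's "nri_uniform" mapping yields P*(P-1)+3 bins: 59, 59 and 243.
OPERATORS = [(8, 1, 59), (8, 2, 59), (16, 2, 243)]

def multiscale_lbp_hist(img):
    """Global multi-scale uniform LBP histogram (59 + 59 + 243 = 361 bins)."""
    feats = []
    for p, r, n_bins in OPERATORS:
        codes = local_binary_pattern(img, p, r, method="nri_uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def overlapped_lbp_hist(img, grid=3, overlap=15):
    """Local multi-scale LBP from a 3 x 3 grid of overlapping blocks
    (9 x 361 = 3249 bins per image)."""
    h, w = img.shape
    step_y, step_x = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            y0, x0 = i * step_y, j * step_x
            block = img[y0:min(y0 + step_y + overlap, h),
                        x0:min(x0 + step_x + overlap, w)]
            feats.append(multiscale_lbp_hist(block))
    return np.concatenate(feats)
```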
Additionally, in order to analyze changes in shape due to cosmetics for the face and iris biometrics, the Log-Gabor transform (L-Gabor) [28] is applied to reflect the frequency response of images more realistically. This study considers five different scales and eight orientations to produce the Log-Gabor transform, with a down-sampling rate of six chosen on the basis of several trial results. The final size of the L-Gabor transformed image was reduced to 40 × 80 for all 40 output images. This work also computes color-based features as color moments within overlapped blocks of the images. The experimental results in Section 4 demonstrate a higher cosmetic detection rate for both face and iris modalities with overlapped color moments than with non-overlapping blocks. To extract the color moments from the entire image, we divided the images into 3 × 3 overlapping regions with an overlap of 15 pixels. For each block, the mean, standard deviation and skewness of the pixels were calculated as the first-, second- and third-order moments, resulting in 81 color features.
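A minimal sketch of the overlapped color moment extraction is given below, assuming an RGB input and the same 3 × 3 grid with a 15-pixel overlap as above (again, the exact windowing is an assumption). Nine blocks × three moments × three channels gives the 81 color features.

```python
import numpy as np
from scipy.stats import skew

def overlapped_color_moments(rgb, grid=3, overlap=15):
    """First three colour moments (mean, standard deviation, skewness) per channel
    for a 3 x 3 grid of overlapping blocks: 9 blocks x 3 moments x 3 channels = 81."""
    h, w, _ = rgb.shape
    step_y, step_x = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            y0, x0 = i * step_y, j * step_x
            block = rgb[y0:min(y0 + step_y + overlap, h),
                        x0:min(x0 + step_x + overlap, w)].reshape(-1, 3)
            feats.extend(block.mean(axis=0))    # first-order moment
            feats.extend(block.std(axis=0))     # second-order moment
            feats.extend(skew(block, axis=0))   # third-order moment
    return np.asarray(feats, dtype=float)
```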

3. Proposed Face–Iris Anti-Cosmetic Scheme

Our proposed cosmetic detection framework combines local and global information extracted from face and/or iris images (Figure 1), which improves the cosmetic detection rate of the system for each modality. The detailed steps of the proposed cosmetic detection scheme for face and iris biometrics are as follows:
Step 1
The image preprocessing step is carried out on all face and iris images separately to detect, scale and localize the faces and irises. After this, the images are cropped, aligned and resized to dimensions of 60 × 60 prior to our cosmetic detection experiments. The images then undergo histogram equalization and mean–variance normalization.
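A minimal sketch of this step, assuming the face or iris region has already been detected and cropped (face detection and iris segmentation are outside its scope), might look as follows; OpenCV is used here purely for illustration.

```python
import cv2
import numpy as np

def preprocess(region, size=(60, 60)):
    """Resize a detected and cropped face or iris region to 60 x 60, then apply
    histogram equalization and zero-mean / unit-variance normalization."""
    img = cv2.resize(region, size, interpolation=cv2.INTER_AREA)
    img = cv2.equalizeHist(img.astype(np.uint8))
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)
```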
Step 2
All face and/or iris images are used as inputs to the L-Gabor algorithm for analyzing changes in shape, which produces 40 output images (five scales × eight orientations). Each of these 40 output images is considered separately in the local and global feature extraction steps.
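The paper does not list the Log-Gabor filter parameters beyond the five scales and eight orientations, so the sketch below uses common illustrative defaults (the minimum wavelength, scaling factor and bandwidth values are assumptions). It builds the filters in the frequency domain and returns the 40 magnitude responses; the down-sampling described in Section 2 is omitted.

```python
import numpy as np

def log_gabor_bank(img, n_scales=5, n_orients=8,
                   min_wavelength=3, mult=2.0, sigma_on_f=0.55, sigma_theta=np.pi / 8):
    """Filter an image with a 5 x 8 Log-Gabor bank and return the 40 magnitude responses."""
    rows, cols = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    radius = np.hypot(fx, fy)
    radius[rows // 2, cols // 2] = 1.0            # avoid log(0) at the DC component
    theta = np.arctan2(-fy, fx)
    spectrum = np.fft.fftshift(np.fft.fft2(img))

    responses = []
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)   # centre frequency of this scale
        radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
        radial[rows // 2, cols // 2] = 0.0        # zero the DC component
        for o in range(n_orients):
            angle = o * np.pi / n_orients
            dtheta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
            filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * radial * angular))
            responses.append(np.abs(filtered))    # magnitude response
    return responses                              # 40 images: 5 scales x 8 orientations
```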
Step 3
The global feature extraction step extracts the histogram of the whole image globally for all 40 output images of one individual using three different operators ($LBP_{8,1}^{u2}$, $LBP_{8,2}^{u2}$, $LBP_{16,2}^{u2}$) in a multi-scale manner. After this, the concatenation of the histograms is considered as the global microtexton information of the textures, as presented in Equation (1):

$GFV = S \times O \left( LBP_{8,1}^{u2} + LBP_{8,2}^{u2} + LBP_{16,2}^{u2} \right)$, (1)

where GFV is the extracted Global Feature Vector, and S and O denote the scales and orientations employed to produce the Log-Gabor transform (five scales and eight orientations).
Simultaneously, the proposed pipeline extracts detailed micro-texture information of local primitives using the overlapped blocks of the multi-scale LBP operators. After this, the concatenation of the overlapped histograms is used to improve cosmetic detection. Additionally, the local feature extraction step extracts the color moments of the images over the overlapping regions and concatenates the resulting features to produce the color feature sets, according to Equation (2). It should be noted that the global and local feature extraction steps are applied separately to all 40 output images produced by the L-Gabor descriptor of Step 2:

$LFV = S \times O \left( M \times N \left[ \left( LBP_{8,1}^{u2} + LBP_{8,2}^{u2} + LBP_{16,2}^{u2} \right) + \rho + \sigma + \gamma \right] \right)$, (2)

where LFV represents the extracted Local Feature Vector; S and O denote the scales and orientations employed to produce the Log-Gabor transform; M and N are the sizes of the overlapping windows used to divide the images (3 × 3); and $\rho$, $\sigma$ and $\gamma$ denote the extracted mean, standard deviation and skewness feature vectors of the overlapped color moments.
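Reusing the helper functions sketched in Section 2 (log_gabor_bank, multiscale_lbp_hist, overlapped_lbp_hist, overlapped_color_moments), one plausible reading of Equations (1) and (2) is sketched below: for each of the 40 Log-Gabor responses, a global vector, a local vector and their concatenation (Equation (3) of Step 4) are built, and the responses are kept separate because Step 6 classifies each output image on its own. Appending the 81 color moments of the original color image to every local vector is an assumption, since the text does not spell out how the color features enter each per-response vector.

```python
import numpy as np

def extract_feature_vectors(gray, rgb):
    """For each of the 40 Log-Gabor responses, build the global vector (Eq. 1),
    the local vector (Eq. 2, with the 81 colour moments appended) and their
    concatenation (the CFVF of Eq. 3 in Step 4)."""
    color_moments = overlapped_color_moments(rgb)
    vectors = []
    for response in log_gabor_bank(gray):
        # rescale the magnitude response to 8 bits before computing LBP codes
        r8 = np.uint8(255 * (response - response.min()) / (response.ptp() + 1e-8))
        gfv = multiscale_lbp_hist(r8)                          # 361-bin global vector
        lfv = np.concatenate([overlapped_lbp_hist(r8),         # 3249-bin local LBP
                              color_moments])                  # + 81 colour moments
        vectors.append((gfv, lfv, np.concatenate([gfv, lfv])))
    return vectors
```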
Step 4
The scheme concatenates the features obtained in the local and global feature extraction steps into a high-dimensional feature set according to Equation (3):

$CFVF = GFV + LFV$, (3)

where CFVF represents the Concatenated Feature Vector Fusion.
Step 5
In order to reduce the dimensions of the concatenated features in the global step, local step and concatenated feature set of Step 4, the proposed scheme employs PCA and LDA to obtain appropriate feature subsets of the face and/or iris by eliminating irrelevant and redundant information.
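As a minimal illustration of this step (using scikit-learn rather than the MATLAB implementation of the paper), PCA and LDA can be chained and fitted on the training split only; the paper states that L − 1 eigenvectors are kept, where L is the number of individuals, which would be passed as the PCA component count.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def reduce_dimensions(train_feats, train_labels, test_feats, n_components=None):
    """Project the concatenated features with PCA followed by LDA, fitted on the
    training split only. Pass n_components = L - 1 (L = number of individuals)
    to match the setting reported in Section 4."""
    reducer = make_pipeline(PCA(n_components=n_components),
                            LinearDiscriminantAnalysis())
    reduced_train = reducer.fit_transform(train_feats, train_labels)
    reduced_test = reducer.transform(test_feats)
    return reduced_train, reduced_test, reducer
```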
Step 6
Classification is conducted using the SVM classifier to detect cosmetics in four different ways for all 40 output images of an individual: on the global feature extraction step, on the local feature extraction step (histogram concatenation and color feature vector) and on the global + local concatenation step.
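A sketch of the classifier used for each feature stream is shown below, with the RBF kernel parameters reported in Section 4 (C = 1, γ = 2); scikit-learn's SVC is built on LIBSVM [23], which the paper uses.

```python
from sklearn.svm import SVC

def train_cosmetic_classifier(train_feats, train_labels):
    """RBF-kernel SVM for one of the four feature streams.
    C = 1 and gamma = 2 follow the settings reported in Section 4."""
    classifier = SVC(kernel="rbf", C=1.0, gamma=2.0)
    classifier.fit(train_feats, train_labels)
    return classifier
```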
Step 7
The last step fuses all decisions obtained from the SVM classifiers through the majority voting [29] decision-level fusion technique. For one individual, 160 decisions (40 × 4) are combined to reach the final decision on the presence of cosmetics in the face/iris image. In the majority voting technique, all classifiers are usually given identical confidence when classifying a set of objects via voting: the prediction of each classifier counts as one vote for the predicted class, and the class that receives the highest number of votes at the end of the voting process wins [29].
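A minimal sketch of the fusion rule follows: the 160 per-stream predictions for one sample are counted and the most frequent label wins. Tie-breaking is not specified in the paper; the sketch simply returns the first label with the maximum count.

```python
import numpy as np

def majority_vote(decisions):
    """Decision-level fusion of Step 7: `decisions` holds the 160 predictions
    (40 Log-Gabor outputs x 4 feature streams) made for one sample; the label
    receiving the most votes is the final cosmetic / no-cosmetic decision."""
    labels, counts = np.unique(np.asarray(decisions), return_counts=True)
    return labels[np.argmax(counts)]
```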

4. Experimental Results and Databases

This section concentrates on the experimental analysis of the proposed cosmetic detection scheme for face and iris biometrics. The robustness of the proposed scheme against the presence of makeup and contact lenses is tested in the experiments. The YouTube Makeup (YMU) database introduced by Dantcheva et al. [12,24] is used to evaluate the performance of our proposed pipeline on facial makeup images. The database contains 151 Caucasian female subjects, with four samples per subject (two before and two after makeup) that vary from a subtle to a heavy degree of makeup. This study considers 600 images of 150 individuals in the experiments, including 300 makeup and 300 non-makeup images. The IIIT-Delhi Contact Lens (IIITD CL) [13] iris database includes 6570 iris images of 101 subjects, captured from both eyes under different lens conditions (no lens, transparent lens and colored texture lens) using two iris sensors. In order to evaluate the robustness of the proposed method for iris contact lenses, we constructed a dataset of 100 individuals with six randomly chosen samples from the left and right irises: two samples without a lens, two with a transparent lens and two with a colored texture lens. In order to validate the performance of the proposed scheme with cosmetics, both the face and iris databases are divided into two equal sets, as presented in Table 1.
In general, 75 individuals are used to construct the training dataset of the face and 50 individuals to construct the training dataset of the iris, while the remaining individuals are used in the test datasets. It should be noted that the individuals used to construct the training dataset are different from the individuals used in the testing dataset for both biometrics. This study reports the performance of the proposed cosmetic detection scheme and the implemented methods in terms of the Classification Rate (CR) and Total Error Rate (TER). The classification rate is the percentage of correctly classified cosmetic and non-cosmetic images, while the total error rate is the sum of the FAR and FRR, which is equal to twice the EER of a biometric system [30,31].
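For reference, a small sketch of these two measures is given below, assuming label 1 for cosmetic (positive) and 0 for non-cosmetic images; this labelling convention is an assumption.

```python
import numpy as np

def classification_and_total_error_rate(y_true, y_pred):
    """CR: percentage of correctly classified cosmetic/non-cosmetic images.
    TER: FAR + FRR, where FAR is the fraction of non-cosmetic images (label 0)
    accepted as cosmetic and FRR is the fraction of cosmetic images (label 1)
    rejected as non-cosmetic."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    cr = 100.0 * np.mean(y_true == y_pred)
    far = np.mean(y_pred[y_true == 0] == 1)   # false accept rate
    frr = np.mean(y_pred[y_true == 1] == 0)   # false reject rate
    return cr, 100.0 * (far + frr)
```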
The first set of experiments examines the robustness of the proposed cosmetic detector and of the other color, shape and texture descriptors implemented in this study, such as LBP, L-Gabor, color moments and their global and local variations. Table 2 analyzes the performance of the face biometric in the presence of cosmetics, with SVM used as the classifier. The best classification and total error rates belong to the proposed detection scheme, at 91.67% and 3.18%, respectively. The results in Table 2 also show that applying the color moment extractor leads to classification rate improvements of 5.67% and 9.67% over the L-Gabor shape descriptor and the LBP texture descriptor, respectively. Moreover, using overlapping blocks further improves both the classification rate and the total error rate for the color moments and the LBP texture descriptor.
Our analysis in Table 2 demonstrates that utilizing the microtexton information of the local overlapped regions of the multi-scale LBP texture descriptor along with the fused overlapped color moments improves the classification and total error rates of makeup detection for face images to 86.00% and 5.16%, respectively. Generally, fusing the local and global overlapped shape, color and texture features of face images in an efficient system, such as the proposed one, improves the detection and classification rate for makeup. We also conducted experiments to detect the iris cosmetics of transparent and color texture contact lenses separately, as reported in Table 3.
The analysis in Table 3 demonstrates the superiority of the proposed method for cosmetic detection, specifically in the presence of color cosmetic contact lenses, in terms of both classification and total error rates. The classification and total error rates of the proposed scheme on the transparent contact lens dataset are 71.50% and 8.83%, respectively; on the color cosmetic contact lens dataset, these values improve to 78.50% and 6.80%. The combination of the microtexton information of the local overlapped regions of the multi-scale LBP texture descriptors with the fused overlapped color moments improved the cosmetic detection of soft and texture lens images, as shown in Table 3. Interestingly, the color moment descriptors achieved better detection rates for texture cosmetic lenses than the L-Gabor shape and LBP texture descriptors, whereas on the transparent lens dataset L-Gabor and LBP obtained higher detection rates. Additionally, the overlapping and multi-scale LBP improve the classification for both transparent and texture contact lenses. In order to show the effectiveness of our proposed cosmetic detection scheme, we compare it with state-of-the-art approaches on the datasets constructed in this study in terms of classification rate, with the results shown in Table 4.
As shown in Table 4, the best classification rate is obtained by the proposed scheme for face and iris cosmetics, with improvements of 0.33%, 0.83% and 2.50% over the best classification rates of the state-of-the-art techniques. As described above, all the experiments in Table 4 were conducted on the same datasets used for evaluating the proposed method; therefore, the results depend on the conditions present in these datasets.
In order to classify the images using the SVM classifier, this study applied the Radial Basis Function (RBF) kernel, selected through iterative trials. The regularization constant and kernel width of the RBF kernel (C and γ) were set to 1 and 2, respectively, during the experiments. The number of eigenvectors used for projecting the images to reduce their dimensionality is set to L − 1, where L is the number of individuals in each dataset. MATLAB R2009a on a 64-bit Windows operating system with an Intel Core i5-5200U CPU at 2.20 GHz and 4.00 GB of RAM was used to implement and perform the experiments.

5. Conclusions

In this paper, we presented a novel cosmetic detection scheme for detecting makeup and contact lenses. The proposed scheme fuses color-, shape- and texture-based features extracted from the face and/or iris with cosmetics, before classification is conducted using an SVM classifier. In general, a multi-scale local–global technique is used in this study to efficiently collect the microtexton information of global and local primitives from faces and/or irises with makeup and contact lenses. The L-Gabor shape descriptor is applied to produce a set of Gabor filters and, consequently, the microtexton information of the global and local primitives is extracted using Ov-LBP. Additionally, in order to collect the color-based information of images with cosmetics, the present work computes the overlapped color moments of face and iris images. The present work provides the first common scheme applied to both face and iris traits with makeup and texture/soft lenses, fusing the advantages of color, shape and texture patterns to efficiently detect spots, flat areas, edges, edge ends, curves and colors. The experimental results demonstrated the robustness of the proposed scheme compared to the state-of-the-art methods implemented in this study. The proposed scheme obtained classification rates of 91.67% for facial makeup detection, in addition to 71.50% and 78.50% for the detection of transparent and color cosmetic contact lenses, respectively.

Author Contributions

The authors performed the experiments and analyzed the results together. Introduction, cosmetic detection algorithms and proposed scheme sections have been written by Omid Sharifi; while experimental results and conclusion sections have been written by Maryam Eskandari.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jain, A.K.; Li, S.Z. Handbook of Face Recognition; Springer: New York, NY, USA, 2011. [Google Scholar]
  2. Bhatt, H.S. Recognizing surgically altered face images using multiobjective evolutionary algorithm. IEEE Trans. Inf. Forensics Secur. 2013, 8, 89–100. [Google Scholar] [CrossRef]
  3. Eskandari, M.; Omid, S. Optimum scheme selection for face–iris biometric. IET Biom. 2016, 6, 334–341. [Google Scholar] [CrossRef]
  4. Sharifi, O.; Eskandari, M.; Yildiz, M.Ç. Scheming an efficient facial recognition system using global and random local feature extraction methods. In Proceedings of the 2017 International Conference on Computer Science and Engineering (UBMK), Antalya, Turkey, 5–8 October 2017. [Google Scholar]
  5. Yildiz, M.C.; Sharifi, O.; Eskandari, M. Log-Gabor transforms and score fusion to overcome variations in appearance for face recognition. In Proceedings of the International Conference on Computer Vision and Graphics, Warsaw, Poland, 19–20 September 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
  6. Määttä, J.; Hadid, A.; Pietikäinen, M. Face spoofing detection from single images using texture and local shape analysis. IET Biometr. 2012, 1, 3–10. [Google Scholar] [CrossRef]
  7. Sharifi, O.; Maryam, E. Optimal face-iris multimodal fusion scheme. Symmetry 2016, 8, 48. [Google Scholar] [CrossRef]
  8. Daugman, J. How iris recognition works. In The Essential Guide to Image Processing; Academic Press: Cambridge, MA, USA, 2009; pp. 715–739. [Google Scholar]
  9. Daugman, J. Demodulation by complex-valued wavelets for stochastic pattern recognition. Int. J. Wavelets Multiresolut. Inf. Process. 2003, 1, 1–17. [Google Scholar] [CrossRef]
  10. Hollingsworth, K.; Bowyer, K.W.; Flynn, P.J. Pupil dilation degrades iris biometric performance. Comput. Vis. Image Underst. 2009, 113, 150–157. [Google Scholar] [CrossRef]
  11. Lee, E.C.; Park, K.R.; Kim, J. Fake iris detection by using purkinje image. In Proceedings of the International Conference on Biometrics, Hong Kong, China, 5–7 January 2006; Springer: Berlin, Germany, 2006. [Google Scholar]
  12. Dantcheva, A.; Cunjian, C.; Arun, R. Can facial cosmetics affect the matching accuracy of face recognition systems? In Proceedings of the 2012 IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS 2012); IEEE: New York, NY, USA, 2012. [Google Scholar]
  13. Kohli, N. Revisiting iris recognition with color cosmetic contact lenses. In Proceedings of the 2013 International Conference on Biometrics (ICB), Madrid, Spain, 4–7 June 2013. [Google Scholar]
  14. Derman, E.; Galdi, C.; Dugelay, J.L. Integrating facial makeup detection into multimodal biometric user verification system. In Proceedings of the 2017 5th International Workshop on Biometrics and Forensics (IWBF), Coventry, UK, 4–5 April 2017. [Google Scholar]
  15. Yadav, D. Unraveling the effect of textured contact lenses on iris recognition. IEEE Trans. Inf. Forensics Secur. 2014, 9, 851–862. [Google Scholar] [CrossRef]
  16. Chen, C.; Antitza, D.; Arun, R. Automatic facial makeup detection with application in face recognition. In Proceedings of the 2013 International Conference on Biometrics (ICB), Madrid, Spain, 4–7 June 2013. [Google Scholar]
  17. Hu, J. Makeup-robust face verification. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013. [Google Scholar]
  18. Guo, G.; Wen, L.; Yan, S. Face authentication with makeup changes. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 814–825. [Google Scholar]
  19. He, X.; An, S.; Shi, P. Statistical texture analysis-based approach for fake iris detection using support vector machines. In Proceedings of the International Conference on Biometrics, Seoul, Korea, 27–29 August 2007; Springer: Berlin, Germany, 2007. [Google Scholar]
  20. Eskandari, M.; Önsen, T. Selection of optimized features and weights on face-iris fusion using distance images. Comput. Vis. Image Underst. 2015, 137, 63–75. [Google Scholar] [CrossRef]
  21. Wei, Z. Counterfeit iris detection based on texture analysis. In Proceedings of the 19th International Conference on Pattern Recognition (ICPR 2008), Tampa, FL, USA, 8–11 December 2008. [Google Scholar]
  22. Zhang, H.; Zhenan, S.; Tieniu, T. Contact lens detection based on weighted LBP. In Proceedings of the 2010 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010. [Google Scholar]
  23. Chang, C.-C.; Chih-Jen, L. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [Google Scholar] [CrossRef]
  24. Chen, C.; Antitza, D.; Arun, R. An ensemble of patch-based subspaces for makeup-robust face recognition. Inf. Fusion 2016, 32, 80–92. [Google Scholar] [CrossRef]
  25. Ahonen, T.; Abdenour, H.; Matti, P. Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2037–2041. [Google Scholar] [CrossRef] [PubMed]
  26. Tapia, J.E.; Claudio, A.P.; Kevin, W.B. Gender classification from iris images using fusion of uniform local binary patterns. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–9 September 2014; Springer: Cham, Switzerland, 2014. [Google Scholar]
  27. Määttä, J.; Abdenour, H.; Matti, P. Face spoofing detection from single images using micro-texture analysis. In Proceedings of the 2011 International Joint Conference on Biometrics (IJCB), Washington, DC, USA, 11–13 October 2011. [Google Scholar]
  28. Zhitao, X. Research on log Gabor wavelet and its application in image edge detection. In Proceedings of the 2002 6th International Conference on Signal Processing, Beijing, China, 26–30 August 2002; Volume 1. [Google Scholar]
  29. Kittler, J. On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 226–239. [Google Scholar] [CrossRef]
  30. Toh, K.-A.; Jaihie, K.; Sangyoun, L. Biometric scores fusion based on total error rate minimization. Pattern Recognit. 2008, 41, 1066–1082. [Google Scholar] [CrossRef]
  31. Liau, H.F.; Dino, I. Feature selection for support vector machine-based face-iris multimodal biometric system. Expert Syst. Appl. 2011, 38, 11105–11111. [Google Scholar] [CrossRef]
  32. Wang, X.; Chandra, K. A new approach for face recognition under makeup changes. In Proceedings of the 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, USA, 14–16 December 2015. [Google Scholar]
  33. Kose, N.; Ludovic, A.; Jean-Luc, D. Facial makeup detection technique based on texture and shape analysis. In Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 4–8 May 2015; Volume 1. [Google Scholar]
Figure 1. Block diagram of the proposed cosmetic detection scheme.
Table 1. Total number of samples available for the training and test datasets of the face and iris biometrics.

Dataset        Train Individuals (No-Cosmetic/Cosmetic)    Test Individuals (No-Cosmetic/Cosmetic)
Face Dataset   75 (150/150)                                75 (150/150)
Iris Dataset   50 (100/200)                                50 (100/200)
Table 2. Classification and total error rates of different methods for face makeup detection.

Methods                              CR (%)    TER (%)
LBP                                  60.00     11.02
L-Gabor                              64.00     10.90
Color-Moments                        69.67      9.52
Ov-Color-Moments                     73.67      7.38
Ov-LBP                               74.00      7.30
multiS-LBP (Global)                  79.67      6.83
multiS-Ov-LBP (Local)                82.67      6.01
multiS-Ov-LBP + Ov-Color-Moments     86.00      5.16
Proposed Scheme                      91.67      3.18
Table 3. Classification and total error rates of different methods for iris transparent and color contact lens detection.

                                     Transparent Contact Lens (Soft)    Color Cosmetic Contact Lens (Texture)
Methods                              CR (%)    TER (%)                  CR (%)    TER (%)
LBP                                  55.00     14.00                    56.50     13.33
L-Gabor                              58.50     13.56                    58.50     13.04
Color-Moments                        49.00     17.81                    62.50     10.96
Ov-Color-Moments                     50.05     17.01                    64.50      9.62
Ov-LBP                               62.50     11.39                    66.00      9.02
multiS-LBP (Global)                  62.50     11.00                    67.50      8.93
multiS-Ov-LBP (Local)                66.50      9.98                    71.00      8.57
multiS-Ov-LBP + Ov-Color-Moments     68.00      9.10                    74.00      7.74
Proposed Scheme                      71.50      8.83                    78.50      6.80
Table 4. Classification rates (%) for the face and iris datasets using several state-of-the-art techniques.

Methods                                                Face Makeup   Transparent Contact Lens (Soft)   Color Cosmetic Contact Lens (Texture)
HOG + SVM [17]                                         52.00         49.00                             51.50
SIFT + SVM [17]                                        54.34         50.50                             51.50
CCA + SVM [17]                                         63.00         60.00                             57.00
CCA + PLS + SVM [32]                                   66.67         60.00                             61.50
LBP + Gabor + GIST + EOH + Color-Moments + SVM [16]    87.00         58.50                             63.00
LGBP + HOG + SVM [33]                                  91.34         70.67                             76.00
Proposed Scheme                                        91.67         71.50                             78.50
