Article

Face Classification Using Color Information

by Atul Sajjanhar * and Ahmed Abdulateef Mohammed
School of Information Technology, Deakin University, Geelong, VIC 3216, Australia
* Author to whom correspondence should be addressed.
Information 2017, 8(4), 155; https://doi.org/10.3390/info8040155
Submission received: 29 September 2017 / Revised: 26 October 2017 / Accepted: 23 November 2017 / Published: 26 November 2017
(This article belongs to the Section Information and Communications Technology)

Abstract: Color models are widely used in image recognition because they represent significant information. On the other hand, texture analysis techniques have been extensively used for facial feature extraction. In this paper, we extract discriminative features related to facial attributes by utilizing different color models and texture analysis techniques. Specifically, we propose novel methods for texture analysis to improve the classification performance for race and gender. The proposed methods for texture analysis are based on the Local Binary Pattern and its derivatives. These texture analysis methods are evaluated for six color models (hue, saturation, value (HSV); L*a*b*; RGB; YCbCr; YIQ; YUV) to investigate the effect of each color model. Further, we configure two combinations of color channels to represent color information suitable for gender and race classification of face images. We perform experiments on publicly available face databases. Experimental results show that the proposed approaches are effective for the classification of gender and race.

1. Introduction

Face images contain information which is useful for face classification [1]. Faces may be classified on the basis of race, gender, age, or expression. Face classification has attracted the attention of many researchers due to a wide range of applications such as security, surveillance, identity verification and video indexing [2,3,4,5]. The human face is rich in demographic information that can be utilized to classify face images accordingly.
In this work, gender and race attributes are used for face classification. Demographic prediction of gender and race has been studied previously. Han and Jain [6] and Han et al. [1] classified face images according to demographic information using biologically inspired features (BIF), which use Gabor filters to extract representative facial features. Farinella and Dugelay [7] classified face images according to race and gender using three techniques, namely, Pixel-Based (PB), Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG). Demirkus et al. [8] presented race and gender classification for video sequences using two techniques, namely, Pixel-Based (PB) and the Biologically Inspired Model (BIM). Wang et al. [9] utilized the Local Circular Pattern (LCP), which is based on the Local Binary Pattern (LBP), for gender classification.
On the other hand, color models have been used in face recognition. Anbarjafari [10] applied the HSV (hue, saturation, value) and YCbCr (Y is luminance, and Cb, Cr are chrominance) color models to the local binary pattern for face recognition, and found that HSV gave better results than YCbCr. Shih and Liu [11] performed a comparison of 12 color models for face recognition. Their experiments showed that the Y and U channels in the YUV (Y is luminance, and U, V are chrominance) color model, and the Y and I channels in the YIQ (Y is luminance, and I, Q are chrominance) color model, can improve face recognition performance. Liu and Liu [12] proposed the Discrete Cosine Feature (DCF) for face recognition, in which representative facial features are obtained by applying the Discrete Cosine Transform (DCT) to the YIQ color model; their experiments showed that DCF enhances face recognition performance. Choi, Ro and Plataniotis [13] proposed two new feature extraction methods for face recognition, namely, color local Gabor wavelets and color local binary pattern. These two methods are applied to two color models, namely, RCrQ (R is luminance, and Cr, Q are chrominance) [12] and normalized ZRG (R is luminance, and Z, G are chrominance) [13]. Experimental results showed that these approaches are competitive for face classification. Kim and Ro [14] presented a face recognition method utilizing multiple color spaces and deep learning, and reported an improvement in face recognition accuracy; their comparison of different color models showed that the best results are obtained with L*a*b*, followed by YCbCr and RIQ. In general, from the aforementioned studies, it is observed that color information can significantly improve face recognition. This has motivated us to investigate face classification based on gender and race attributes using different color models and different texture analysis techniques.
In this paper, we propose novel texture analysis approaches for face classification and test these approaches for different color models. Our proposed approaches are evaluated for gender and race classification of face images. This paper is organized as follows: in Section 2, we explain existing techniques for texture analysis. Commonly used color models are explained in Section 3. Details of our proposed approaches for texture analysis are given in Section 4. Section 5 gives details of our experimental setup. Experimental results are presented in Section 6. Discussion of results is given in Section 7. Finally, we conclude the paper in Section 8.

2. Texture Analysis

Local Binary Pattern (LBP) [15] is a reliable technique for texture analysis and is widely used in face classification. In this section, we describe LBP [15] and its variants, namely, Compound Local Binary Pattern (CLBP) [18] and Non-Redundant Local Binary Pattern (NRLBP) [16], which are effective approaches for texture analysis.

2.1. Local Binary Pattern (LBP)

Local Binary Pattern (LBP) was originally proposed by Ojala et al. [15] and is widely used to describe texture. The LBP operator considers the 3 × 3 neighborhood of each pixel in the image and thresholds the intensity values of the neighbors against the intensity value of the center pixel to produce a binary string. The neighbors are defined on a circle, starting from the top pixel and moving clockwise: if the intensity value of a neighbor is greater than or equal to the intensity value of the center pixel, it is assigned 1, otherwise it is assigned 0. The binary string generated for each pixel is then converted to a decimal value, so each pixel is represented by a single decimal value, and the histogram of these decimal values is used to describe the texture. The decimal value of the LBP code of a pixel c at position (xc, yc) is defined as follows [10].
$$\mathrm{LBP}_{P,R}(x_c, y_c) = \sum_{n=1}^{P} s(I_n - I_c)\, 2^{\,n-1}$$

$$s(y) = \begin{cases} 1, & \text{if } y \geq 0 \\ 0, & \text{otherwise} \end{cases}$$

where P is the number of pixels in the neighborhood, R is the radius around the center pixel, $I_c$ is the intensity of the pixel c, and $I_n$ is the intensity of the n-th neighboring pixel.
Subsequently, two extensions were developed for the basic LBP operator [17]. In the first extension, the LBP operator is generalized to neighborhoods of different sizes. In the second extension, the uniform LBP is defined: a pattern is uniform if it contains no more than two bitwise transitions from 1 to 0, or vice versa, when the binary string is considered as a circular string [17]. The uniform LBP operator with P neighbors and radius R is referred to as $\mathrm{LBP}^{u2}_{P,R}$. For example, 00011000 and 11100111 are uniform patterns. With eight neighbors, the operator (referred to as $\mathrm{LBP}^{u2}_{8,R}$) produces 59 distinct labels (58 uniform patterns plus one label collecting all non-uniform patterns), while with 16 neighbors it produces 243 labels [17].
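As an illustration of the operator (not the authors' implementation), the following Python sketch computes basic LBP codes for a grayscale image with eight neighbors at radius 1; the uniform-pattern mapping and histogramming described above are omitted.

```python
import numpy as np

def lbp_8_1(image):
    """Basic LBP with 8 neighbors at radius 1, clockwise from the top pixel.
    Border pixels are skipped; returns an (H-2) x (W-2) array of codes in [0, 255]."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    # Neighbor offsets (dy, dx), clockwise starting from the pixel directly above the center.
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    center = img[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # s(I_n - I_c): 1 if the neighbor is >= the center, 0 otherwise; weight 2^bit.
        codes += (neighbor >= center).astype(np.int32) * (1 << bit)
    return codes
```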

2.2. Compound Local Binary Pattern (CLBP)

Compound Local Binary Pattern (CLBP) [18] is one of the LBP derivatives. The CLBP operator encodes each pixel in the 3 × 3 neighborhood of the center pixel with two bits: the first bit represents the sign of the difference from the center pixel and the second bit represents its magnitude, as shown in Equations (3) and (4), respectively [18].
$$\mathrm{CLBP}_{\mathrm{sign}} = \begin{cases} 1, & \text{if } I_n - I_c > 0 \\ 0, & \text{otherwise} \end{cases}$$

$$\mathrm{CLBP}_{\mathrm{mag}} = \begin{cases} 1, & \text{if } |I_n - I_c| > M_{avg} \\ 0, & \text{otherwise} \end{cases}$$

$$M_{avg} = \frac{|mag_1| + |mag_2| + \cdots + |mag_8|}{8}$$

where $I_n$ and $I_c$ are the intensities of the neighboring pixel and the center pixel, and $mag_1$ to $mag_8$ are the magnitudes of the differences between $I_n$ and $I_c$ over the eight neighbors.
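A minimal sketch of the compound operator follows, assuming the same clockwise 8-neighborhood as above; the interleaved bit packing (sign bit, then magnitude bit, per neighbor) is our own convention, since the paper does not specify one.

```python
import numpy as np

def clbp_8_1(image):
    """Compound LBP sketch: two bits per neighbor (sign and magnitude), 16 bits in total."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    center = img[1:-1, 1:-1]
    diffs = [img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - center for dy, dx in offsets]
    m_avg = np.mean(np.abs(diffs), axis=0)              # average magnitude of the 8 differences
    codes = np.zeros(center.shape, dtype=np.int32)
    for n, d in enumerate(diffs):
        sign_bit = (d > 0).astype(np.int32)             # Equation (3)
        mag_bit = (np.abs(d) > m_avg).astype(np.int32)  # Equation (4)
        codes += sign_bit * (1 << (2 * n)) + mag_bit * (1 << (2 * n + 1))
    return codes
```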

2.3. Non-Redundant Local Binary Pattern (NRLBP)

Nguyen et al. [16] proposed a new texture analysis technique, namely, Non-Redundant Local Binary Pattern (NRLBP). This derivative of the LBP operator is defined as follows.
$$\mathrm{NRLBP}_{P,R}(x_c, y_c) = \min\left\{\mathrm{LBP}_{P,R}(x_c, y_c),\; 2^P - 1 - \mathrm{LBP}_{P,R}(x_c, y_c)\right\}$$
NRLBP considers both the LBP code and its complement. If the LBP code is (11001001), equal to 201 in decimal, then the complement is (00110110), which is 54 in decimal, and the NRLBP operator returns the lower of the two values. Nguyen et al. [16] found that NRLBP has more discriminative power than the original LBP for object detection.
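Given LBP codes (for example from the sketch in Section 2.1), NRLBP reduces to an elementwise minimum; the following lines are a minimal illustration.

```python
import numpy as np

def nrlbp(lbp_codes, p=8):
    """Non-Redundant LBP: keep the smaller of each LBP code and its bitwise complement."""
    codes = np.asarray(lbp_codes, dtype=np.int64)
    return np.minimum(codes, (2 ** p) - 1 - codes)

# Worked example from the text: 11001001b = 201, complement 00110110b = 54, NRLBP = 54.
assert nrlbp(0b11001001) == 54
```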

3. Color Models

In this section, we briefly describe the color models considered in this paper. Color images are typically stored in the RGB color model, which can be converted to other color models. However, the RGB model is sensitive to illumination and other ambient conditions [11].
The HSV (hue, saturation, value) model is used in [10,11] for face recognition; in [10], it was found that HSV gave better results than YCbCr. Hue (H) and saturation (S) represent chrominance, whereas value (V) represents luminance. The HSV color model is defined as below [11].
mx = max(R, G, B); mn = min(R, G, B); df = mx − mn

$$H = \begin{cases} 60\left(\dfrac{G - B}{df}\right), & \text{if } mx = R \\ 60\left(\dfrac{B - R}{df} + 2\right), & \text{if } mx = G \\ 60\left(\dfrac{R - G}{df} + 4\right), & \text{if } mx = B \\ \text{not defined}, & \text{if } mx = 0 \end{cases}$$

H is in the range [0, 360], so H = H + 360 if H < 0.

$$S = \begin{cases} \dfrac{df}{mx}, & \text{if } mx \neq 0 \\ 0, & \text{if } mx = 0 \end{cases}$$

$$V = mx$$
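The piecewise definition above translates directly into code. The following per-pixel sketch assumes R, G, B are already scaled to [0, 1] (the paper does not state the input range) and returns H in [0, 360].

```python
def rgb_to_hsv_pixel(r, g, b):
    """HSV of one RGB pixel, following the piecewise H, S, V definitions above."""
    mx, mn = max(r, g, b), min(r, g, b)
    df = mx - mn
    if mx == 0:
        return 0.0, 0.0, 0.0              # H is undefined when mx = 0; return zeros by convention
    if df == 0:
        h = 0.0                           # achromatic pixel: hue is arbitrary
    elif mx == r:
        h = 60.0 * ((g - b) / df)
    elif mx == g:
        h = 60.0 * ((b - r) / df + 2)
    else:
        h = 60.0 * ((r - g) / df + 4)
    if h < 0:
        h += 360.0                        # keep H in [0, 360]
    s = df / mx
    v = mx
    return h, s, v
```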
YUV and YIQ have shown good results in face recognition in [11,12]. The YIQ color model is used by the NTSC (National Television System Committee) video standard, whereas the YUV color model is adopted by PAL (Phase Alternation by Line) and SECAM (System Electronique Couleur Avec Memoire). YUV and YIQ are defined as below [19].
$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1471 & -0.2888 & 0.4359 \\ 0.6148 & -0.5148 & -0.1000 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ 0.5957 & -0.2745 & -0.3213 \\ 0.2115 & -0.5226 & 0.3111 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
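Both conversions are plain 3 × 3 linear transforms, so they can be applied to a whole image with a single matrix product. The sketch below assumes an H × W × 3 RGB array; the coefficient signs follow the standard YUV/YIQ definitions, since they were lost in the rendered matrices.

```python
import numpy as np

# Rows map (R, G, B) to (Y, U, V) and (Y, I, Q), respectively (Section 3).
RGB_TO_YUV = np.array([[ 0.2990,  0.5870,  0.1140],
                       [-0.1471, -0.2888,  0.4359],
                       [ 0.6148, -0.5148, -0.1000]])
RGB_TO_YIQ = np.array([[ 0.2990,  0.5870,  0.1140],
                       [ 0.5957, -0.2745, -0.3213],
                       [ 0.2115, -0.5226,  0.3111]])

def convert(rgb_image, matrix):
    """Apply a 3 x 3 color-model matrix to every pixel of an H x W x 3 RGB image."""
    return np.asarray(rgb_image, dtype=np.float64) @ matrix.T

# Usage: yuv = convert(img, RGB_TO_YUV); yiq = convert(img, RGB_TO_YIQ)
```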
The YCbCr model is a scaled and offset version of the YUV color model. YCbCr is used in [10,11,14] for face recognition: in [10], YCbCr is found to be effective; in [11], it is compared with 12 color models and found to be competitive; in [14], YCbCr gave favorable results. YCbCr was developed from RGB for digital video standards and television transmission. YCbCr is obtained by separating the RGB color channels into a luminance channel (Y) and two chrominance channels, blue (Cb) and red (Cr). YCbCr is defined as below [11].
$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 65.4810 & 128.5530 & 24.9660 \\ -37.7745 & -74.1592 & 111.9337 \\ 111.9581 & -93.7509 & -18.2072 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
The L*a*b* model is used in [11,14]; it gave the best results in [14], followed by YCbCr. L*a*b* is derived from the XYZ tristimulus values [11] as below.
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.607 & 0.174 & 0.200 \\ 0.299 & 0.587 & 0.114 \\ 0.000 & 0.066 & 1.116 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
The L* color channel corresponds to brightness and has a range [0, 100]. The a* color channel is a measure of red (positive values) or green (negative values), and the b* color channel is a measure of yellow (positive values) or blue (negative values). Based on the XYZ tristimulus values, the L*a*b* color model is defined as below [11].
$$L^* = 116\, f\!\left(\frac{Y}{Y_n}\right) - 16$$

$$a^* = 500\left[f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right)\right]$$

$$b^* = 200\left[f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right)\right]$$

where

$$f(x) = \begin{cases} x^{1/3}, & \text{if } x > 0.008856 \\ 7.787\,x + \dfrac{16}{116}, & \text{otherwise} \end{cases}$$
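A sketch of the RGB → XYZ → L*a*b* pipeline follows. The white point (Xn, Yn, Zn) is not stated in the paper, so a D65-like value is assumed here purely for illustration.

```python
import numpy as np

RGB_TO_XYZ = np.array([[0.607, 0.174, 0.200],
                       [0.299, 0.587, 0.114],
                       [0.000, 0.066, 1.116]])

def lab_f(x):
    """Piecewise function f(x) used by the L*a*b* definition above."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > 0.008856, np.cbrt(x), 7.787 * x + 16.0 / 116.0)

def rgb_to_lab(rgb, white=(0.9505, 1.0, 1.089)):
    """L*a*b* of an H x W x 3 RGB image (channels in [0, 1]); 'white' is an assumed reference."""
    xyz = np.asarray(rgb, dtype=np.float64) @ RGB_TO_XYZ.T
    xn, yn, zn = white
    fx, fy, fz = lab_f(xyz[..., 0] / xn), lab_f(xyz[..., 1] / yn), lab_f(xyz[..., 2] / zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=-1)
```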

4. Proposed Methods

In this section, we describe our proposed method for face classification. The motivation of the proposed method is to transform the face image into polar space; the image in polar space is then used for feature extraction of face attributes. Polar raster sampled images have been previously used in different pattern recognition approaches, including face recognition. In the generic Fourier descriptor (GFD) [20], the Fourier transform is applied to a polar raster sampled image. Oh and Kwak [21] utilized polar coordinates for face recognition: representative facial features are extracted by converting the face image from Cartesian coordinates to polar coordinates, and the linear discriminant analysis (LDA) algorithm is then applied to project the polar-coordinate face image into a lower-dimensional subspace. Bhattacharjee [22] introduced a face recognition system based on an adaptive polar transform for visual and thermal images; in this approach, the discrete wavelet transform with the Daubechies-4 mother wavelet is used to decompose the polar visual and thermal images. Song and Li [23] proposed a method for local feature extraction, namely, Local Polar DCT Features (LPDF), in which the DCT is applied to local image patches sampled in polar coordinates; the DCT coefficients are then rearranged in a zigzag scanning order (from low frequency to high frequency), yielding a one-dimensional feature vector.
The motivation for using polar coordinates for feature extraction is that images in polar coordinates are robust to translation, scaling, and rotation [20,21,22,23]. The Cartesian coordinates (x, y) are converted to polar coordinates (r, θ) as follows.
$$r = \sqrt{(x - x_c)^2 + (y - y_c)^2}$$

$$\theta = \arctan\!\left(\frac{y - y_c}{x - x_c}\right)$$
where (xc, yc) represents the centroid of the image. Figure 1 shows a face image in Cartesian space and its transformation into polar space.
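One way to realize this transform is to sample the image on a regular (r, θ) grid and look up the nearest Cartesian pixel, as in the sketch below; the output grid size and the nearest-neighbor interpolation are assumptions, since the paper does not specify them.

```python
import numpy as np

def to_polar(image, radial_bins=128, angular_bins=128):
    """Resample an image onto an (r, theta) grid centered at the image center (the centroid
    of a rectangular image). Nearest-neighbor lookup; rows index r, columns index theta."""
    h, w = image.shape[:2]
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(xc, yc)
    r = np.linspace(0.0, r_max, radial_bins)
    theta = np.linspace(-np.pi, np.pi, angular_bins, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    x = np.clip(np.round(xc + rr * np.cos(tt)).astype(int), 0, w - 1)
    y = np.clip(np.round(yc + rr * np.sin(tt)).astype(int), 0, h - 1)
    return image[y, x]
```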
The representative facial features are extracted from the polar-transformed image (Figure 1) using the texture analysis techniques discussed in Section 2. We refer to LBP, CLBP and NRLBP, applied in polar space, as PLBP, PCLBP and P-NRLBP, respectively. Further, the features are extracted for each color model discussed in Section 3; hence, face images are converted from the RGB color model to the other color models, namely, HSV, L*a*b*, YCbCr, YIQ and YUV. Representative facial features are extracted for each color model by applying the LBP-based texture analysis techniques in Cartesian and polar space. Texture features are extracted from a face image by dividing it into 7 × 7 blocks and applying LBP (or its variants) to each block. The features for all blocks are concatenated to generate the feature vector, and the dimensionality of the feature vector is reduced by applying principal component analysis (PCA) [24]. Assuming v1, v2 and v3 are the feature vectors for the three color channels of a color model, the normalized feature vector vnorm is produced as follows.
$$v_{norm} = (f_1, f_2, \ldots, f_n)$$

where $f_i$ is generated by normalizing the corresponding elements $x_i$, $y_i$ and $z_i$ of v1, v2 and v3, as follows.

$$f_i = \left(\frac{x_i - \mu_1}{\delta_1},\; \frac{y_i - \mu_2}{\delta_2},\; \frac{z_i - \mu_3}{\delta_3}\right)$$

where $\mu_j$ and $\delta_j$ refer to the mean and standard deviation of the feature vector $v_j$.
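Under one plausible reading of the equations above (z-score normalization per channel, then interleaving the normalized elements index by index), the fusion step can be sketched as follows.

```python
import numpy as np

def normalize_and_fuse(v1, v2, v3):
    """Normalize the three per-channel feature vectors and interleave them into v_norm."""
    def z(v):
        v = np.asarray(v, dtype=np.float64)
        return (v - v.mean()) / v.std()          # (x_i - mu_j) / delta_j for channel j
    z1, z2, z3 = z(v1), z(v2), z(v3)
    # Each f_i groups the normalized i-th elements of the three channels.
    return np.column_stack([z1, z2, z3]).ravel()
```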

5. Experimental Setup

We perform experiments to classify face images according to two facial attributes, namely, gender and race. The experiments are performed utilizing six color models: RGB, HSV, L*a*b*, YCbCr, YIQ and YUV. We evaluate the proposed texture analysis methods for classification of face images obtained from the FERET database [25]. The attributes (labels), along with their related classes, are shown in Table 1.
Experiments are performed using face images from the FERET and PICS databases. The databases, preprocessing and feature extraction are described in the following subsections.

5.1. Database

Public face databases are used for experiments. The FERET database [25] is used for gender and race classification. The PICS database [26] is used for gender classification. The databases are described below.
The FERET database includes 14,126 face images of 1199 subjects. Images in the FERET database include pose variations (frontal images and rotated images with different angles, including profile images at 90 degrees). Our experiments use 130 subjects with five images each, a total of 650 images; these images are labeled for race and gender. In addition to pose variation, images in the FERET database present another challenge, namely, occlusion caused by glasses, beards, and moustaches.
The PICS database is used for gender classification only, as it contains labels for gender. PICS is a collection of face images organized into sets; we used three sets, namely, Aberdeen, Pain, and Utrecht ECVP, which comprise color images. The Aberdeen set includes 687 color images of 90 individuals (with a different number of images per individual) and has variations in lighting conditions and viewpoint (22, 45, 67 and 90 degrees). The Pain set includes 599 color images of 23 individuals (13 women and 10 men) with different facial expressions; in addition to expression variations, this set also includes face images rotated by 45 and 90 degrees. The Utrecht ECVP set consists of 131 images of 69 individuals (49 men and 20 women). In total, we used 913 images from this database.

5.2. Preprocessing

Before discriminative facial features are extracted and classified, the face images are subjected to a preprocessing stage that prepares them for feature extraction. Preprocessing improves the discriminative power of the extracted features and leads to better and more realistic results. The preprocessing stage includes two steps. First, image cropping: most face images include facial and non-facial regions, and this step keeps the desired facial area and discards the unwanted area. Second, image resizing: cropping may result in face images of different sizes, and a uniform size is necessary prior to feature extraction; the cropped images in Cartesian coordinates are therefore resized to a fixed size of 128 × 128 pixels.
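The two preprocessing steps can be sketched as below; the face bounding box is assumed to come from a separate face detector or from database annotations, which the paper does not detail.

```python
import cv2

def preprocess(image, face_box):
    """Crop the face region given as (x, y, w, h) and resize it to 128 x 128 pixels."""
    x, y, w, h = face_box
    face = image[y:y + h, x:x + w]
    return cv2.resize(face, (128, 128))
```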

5.3. Feature Extraction

After preprocessing, representative features are extracted from the face images using the texture analysis approaches described in Section 2. For feature extraction, preprocessed face images represented in the RGB color model and Cartesian coordinates are transformed into polar space, as described in Section 4. Texture features are extracted by applying LBP (and its variants) to each channel of the images in polar space. The process of feature extraction is repeated for the other color models, namely, HSV, L*a*b*, YCbCr, YIQ and YUV, as described in Section 3.
Figure 2 shows Y, Cb, Cr channels (of the YCbCr color model) of a face image which is transformed into polar space.
The Cr color channel of Figure 2a is transformed into polar space in Figure 3a. The LBP codes of Figure 3a utilizing eight neighbors (P = 8) and different radii (R = 1, 2, 3) are shown in Figure 3b–d.
In our experiments, we divide the face images (in polar space) into 7 × 7 = 49 non-overlapping blocks. A histogram of texture features is then created for each block; with eight neighbors, each block histogram has 59 bins. Dividing the image into a large number of small blocks yields long feature vectors and slow classification, while dividing it into a small number of large blocks causes loss of spatial information [17]. We consider eight neighbors (P = 8) and different radii (R = 1, 2, 3) in all experiments. Hence, applying a texture analysis operator at the three radii yields 7 × 7 blocks × 3 radii × 59 bins = 8673 features per color channel. The texture features for the three color channels are normalized and concatenated, as described in Section 4, and the feature vector is reduced using the PCA algorithm [24]. In previous work [27], the authors evaluated different feature lengths and found that 150 features (50 features for each color channel) was a good tradeoff between classification accuracy, computational effort, and storage requirements.
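The block-wise histogram step can be sketched as follows for one color channel and one operator; the mapping of raw LBP codes to the 59 uniform labels (values in [0, 58]) is assumed to be done beforehand.

```python
import numpy as np

def block_histograms(label_image, blocks=7, bins=59):
    """Divide a label image into blocks x blocks non-overlapping regions and
    concatenate one histogram per region (49 x 59 = 2891 features per radius)."""
    h, w = label_image.shape
    bh, bw = h // blocks, w // blocks
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            block = label_image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            feats.append(hist)
    return np.concatenate(feats)

# Concatenating the three radii gives 3 x 2891 = 8673 features per channel,
# which PCA then reduces to 50 features per channel.
```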

6. Experimental Results

The face images are represented using different texture analysis operators, namely, LBP, PLBP, CLBP, PCLBP, NRLBP and P-NRLBP. These operators are applied to face images in Cartesian coordinates (LBP, CLBP and NRLBP) and polar coordinates (PLBP, PCLBP and P-NRLBP), with eight neighbors (P = 8) and different radii (R = 1, 2, 3). The feature vectors that describe the face images are then reduced to 150 features using the PCA algorithm, as a tradeoff between accuracy and computational cost.
Classification is performed using two commonly used single-label classifiers: an SVM classifier [28] with a polynomial kernel and a KNN classifier [29] with K = 1 neighbor, in Weka 3.8.1; feature extraction is performed using MATLAB R2014a. Results are obtained using a tenfold cross-validation strategy. The proposed texture analysis operators, namely, PLBP, PCLBP and P-NRLBP, are evaluated and compared with the existing operators, namely, LBP, CLBP and NRLBP, for gender and race classification of face images. Moreover, the evaluation is performed for each of the six color models defined in Section 3, on face images obtained from the FERET and PICS databases. Gender and race classification results for the FERET database are shown in Figure 4, Figure 5, Figure 6 and Figure 7.
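The experiments used Weka's SVM and KNN implementations; for readers who prefer Python, a rough scikit-learn equivalent of the evaluation protocol is sketched below (the kernel degree and other hyperparameters are assumptions, so scores will not match the reported ones exactly).

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate(features, labels):
    """Tenfold cross-validation with an SVM (polynomial kernel) and a 1-NN classifier."""
    svm = SVC(kernel="poly", degree=1)            # assumed kernel degree
    knn = KNeighborsClassifier(n_neighbors=1)     # K = 1, as in the paper
    svm_acc = cross_val_score(svm, features, labels, cv=10).mean()
    knn_acc = cross_val_score(knn, features, labels, cv=10).mean()
    return svm_acc, knn_acc
```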
Gender classification results for the PICS database are shown in Figure 8 and Figure 9.
We found that the most effective color channels are V (in HSV), G (in RGB), Cb (in YCbCr), Y (in YIQ), and Y (in YUV). Therefore, we configured two new combinations of color channels to represent color information: YVCb, composed of Y (from YUV), V (from HSV) and Cb (from YCbCr); and YVG, composed of Y (from YUV), V (from HSV) and G (from RGB). Table 2, Table 3, Table 4 and Table 5 show classification results for the proposed color channel combinations.

7. Discussion

In this section, we discuss the results obtained from the experiments for gender and race classification.
From the results shown in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 and Table 2, Table 3, Table 4 and Table 5, better results are generally obtained using our proposed methods, namely, PLBP, PCLBP and P-NRLBP. Overall, the proposed methods improved the classification rates in 89.81% of the cases.
For the FERET database, the highest classification rate for the gender attribute is achieved by YVG using PCLBP and SVM; the corresponding confusion matrix is shown in Table 6. For the PICS database, the highest classification rate for the gender attribute is achieved by YUV using P-NRLBP and KNN; the corresponding confusion matrix is shown in Table 7.
Table 6 shows that females in the FERET database are misclassified as males more often than males are misclassified as females. This is attributed to three reasons. First, there is a small number of female images in the database (430 males versus 220 females), resulting in a class imbalance. Second, many female images have a large area of the face obscured by hair. Third, many images are non-frontal, which reduces the effectiveness of the features. On the other hand, Table 7 shows that gender misclassification is small for the PICS database because the training set is larger; also, we used mostly frontal face images from the Aberdeen and Utrecht sets and only a small number of non-frontal images from the Pain set.
The highest classification rate for race classification on the FERET database is achieved for L*a*b* using PCLBP and SVM. The corresponding confusion matrix is shown in Table 8.
From Table 8, we observe that race classification is considerably improved by the proposed approach. Europeans have the highest accuracy amongst all races because of their distinctive skin texture. However, there is a higher rate of misclassification compared with gender classification. This is attributed to a number of factors. First, misclassification is affected by facial hair, including beards and moustaches. Second, illumination variation in the facial images contributes to misclassification because it compromises effective texture representation. Third, skin texture by itself is not sufficient to distinguish between races: the distinctive features of East Asians are the shapes of the eyes and nose, and the distinctive feature of Africans is skin color. Hence, incorporating shape and color features is likely to further improve race classification.

8. Conclusions

In this paper, we presented new methods for representing the gender and race facial attributes. We proposed texture analysis techniques based on LBP and its derivatives, namely, CLBP and NRLBP. The proposed methods are motivated by the effectiveness of polar raster sampled images for feature extraction. We evaluated the proposed methods by performing experiments on the single-label classification of face images, investigating several color models and using SVM and KNN classifiers. Experimental results show that, in most cases, the proposed methods improve the classification results. Furthermore, two combinations of color channels were configured and evaluated; this newly proposed color representation showed a significant improvement in the classification results for the gender attribute.

Author Contributions

A.S. and A.M. conceived and designed the experiments; A.M. performed the experiments; A.S. and A.M. analyzed the data; A.S. and A.M. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Han, H.; Otto, C.; Liu, X.; Jain, A.K. Demographic estimation from face images: Human vs. machine performance. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1148–1161.
2. Jafri, R.; Arabnia, H.R. A survey of face recognition techniques. J. Inf. Process. Syst. 2009, 5, 41–68.
3. Jafri, R.; Arabnia, H.R. Fusion of Face and Gait for Automatic Human Recognition. In Proceedings of the International Conference on Information Technology—New Generations, Las Vegas, NV, USA, 7–8 April 2008; pp. 167–173.
4. Jafri, R.; Arabnia, H.R. Analysis of Subspace-Based Face Recognition Techniques under Changes in Imaging Factors. In Proceedings of the IEEE International Conference on Information Technology—New Generations (ITNG 2007), Las Vegas, NV, USA, 2–4 April 2007; pp. 406–413.
5. Jafri, R.; Arabnia, H.R.; Simpson, K.J. An Integrated Face-Gait System for Automatic Recognition of Humans. In Proceedings of the International Conference on Security & Management, Las Vegas, NV, USA, 13–16 July 2008; pp. 580–591.
6. Han, H.; Jain, A.K. Age, Gender and Race Estimation from Unconstrained Face Images; MSU Technical Report (MSU-CSE-14–5); Department of Computer Science and Engineering, Michigan State University: East Lansing, MI, USA, 2014.
7. Farinella, G.; Dugelay, J.L. Demographic classification: Do gender and ethnicity affect each other? In Proceedings of the IEEE International Conference on Informatics, Electronics & Vision (ICIEV), Dhaka, Bangladesh, 18–19 May 2012; pp. 383–390.
8. Demirkus, M.; Garg, K.; Guler, S. Automated person categorization for video surveillance using soft biometrics. In Proceedings of the SPIE Defense, Security, and Sensing, International Society for Optics and Photonics, Orlando, FL, USA, 5–9 April 2010; p. 76670P.
9. Wang, C.; Huang, D.; Wang, Y.; Zhang, G. Facial image-based gender classification using local circular patterns. In Proceedings of the 21st IEEE International Conference on Pattern Recognition, Tsukuba, Japan, 11–15 November 2012; pp. 2432–2535.
10. Anbarjafari, G. Face recognition using color local binary pattern from mutually independent color channels. EURASIP J. Image Video Process. 2013, 2013, 1–11.
11. Shih, P.; Liu, C. Comparative assessment of content-based face image retrieval in different color spaces. Int. J. Pattern Recognit. Artif. Intell. 2005, 19, 873–893.
12. Liu, Z.; Liu, C. Fusion of color, local spatial and global frequency information for face recognition. Pattern Recognit. 2010, 43, 2882–2890.
13. Choi, J.Y.; Ro, Y.M.; Plataniotis, K.N. Color local texture features for color face recognition. IEEE Trans. Image Process. 2012, 21, 1366–1380.
14. Kim, H.I.; Ro, Y.M. Collaborative facial color feature learning of multiple color spaces for face recognition. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 1669–1673.
15. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
16. Nguyen, D.T.; Zong, Z.; Ogunbona, P.; Li, W. Object detection using non-redundant local binary patterns. In Proceedings of the 17th IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 4609–4612.
17. Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2004; pp. 469–481.
18. Sujatha, B.M.; Babu, K.S.; Raja, K.B.; Venugopal, K.R. Hybrid domain based face recognition using DWT, FFT and compressed CLBP. Int. J. Image Process. 2015, 9, 283–303.
19. Plataniotis, K.; Venetsanopoulos, A.N. Color Image Processing and Applications; Springer Science & Business Media: New York, NY, USA, 2013.
20. Zhang, D.; Lu, G. Generic Fourier descriptor for shape-based image retrieval. In Proceedings of the International Conference on Multimedia and Expo, Atlanta, GA, USA, 26–29 August 2002; Volume 1, pp. 425–428.
21. Oh, J.H.; Kwak, N. Robust Face Recognition under the Polar Coordinate System. 2016. Available online: http://mipal.snu.ac.kr/images/2/2b/IPC4052.pdf (accessed on 25 May 2017).
22. Bhattacharjee, D. Adaptive polar transform and fusion for human face image processing and evaluation. In Human-Centric Computing and Information Sciences, Article 4; Springer: Berlin/Heidelberg, Germany, 2014.
23. Song, T.; Li, H. Local polar DCT features for image description. IEEE Signal Process. Lett. 2013, 20, 59–62.
24. Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cogn. Neurosci. 1991, 3, 71–86.
25. Phillips, P.J.; Moon, H.; Rizvi, S.A.; Rauss, P.J. The FERET evaluation methodology for face-recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1090–1104.
26. Psychological Image Collection at Stirling (PICS). Available online: pics.stir.ac.uk (accessed on 29 May 2016).
27. Mohammed, A.A.; Sajjanhar, A. Robust approaches for multi-label face classification. In Proceedings of the IEEE International Conference on Digital Image Computing: Techniques and Applications, Gold Coast, Australia, 30 November–2 December 2016; pp. 275–280.
28. Hsu, C.W.; Chang, C.C.; Lin, C.J. A Practical Guide to Support Vector Classification. 2003, pp. 1–16. Available online: http://www.datascienceassn.org/sites/default/files/Practical%20Guide%20to%20Support%20Vector%20Classification.pdf (accessed on 13 May 2017).
29. Boiman, O.; Shechtman, E.; Irani, M. In defense of nearest-neighbor based image classification. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
Figure 1. Face image in: (a) Cartesian coordinates; (b) polar coordinates.
Figure 2. (a) RGB face image; (b) Y channel; (c) Cb channel; (d) Cr channel.
Figure 3. (a) Polar image of Cr color channel of Figure 2a; (b) Local Binary Patterns (LBP) (P = 8, R = 1); (c) LBP (P = 8, R = 2); (d) LBP (P = 8, R = 3).
Figure 4. Gender classification using SVM (FERET database).
Figure 5. Gender classification using KNN (FERET database).
Figure 6. Race classification using SVM (FERET database).
Figure 7. Race classification using KNN (FERET database).
Figure 8. Gender classification using KNN (PICS database).
Figure 9. Gender classification using SVM (PICS database).
Table 1. Facial Attributes and Classes.

Labels    Classes
Gender    Male, Female
Race      European, African, Middle Eastern, South Asian, East Asian, Hispanic
Table 2. Classification Rates for Gender and Race Attributes for YVCb (FERET database). Local Binary Pattern (LBP); Compound Local Binary Pattern (CLBP); Non-Redundant Local Binary Pattern (NRLBP).

Approach    Gender Classification    Race Classification
            SVM       KNN            SVM       KNN
LBP         90.15     90.46          68.76     75.53
PLBP        92.92     91.69          79.38     79.69
CLBP        89.84     89.84          66.46     75.07
PCLBP       92.30     91.69          75.23     80.30
NRLBP       88.92     90.00          68.76     80.00
P-NRLBP     92.30     92.00          80.92     82.46
Table 3. Classification Rates for Gender and Race Attributes for YVG (FERET database).

Approach    Gender Classification    Race Classification
            SVM       KNN            SVM       KNN
LBP         90.00     90.15          69.38     75.23
PLBP        91.69     92.15          79.38     79.07
CLBP        89.84     89.07          67.23     75.38
PCLBP       94.00     91.23          74.30     80.61
NRLBP       88.92     90.30          69.23     80.15
P-NRLBP     92.30     91.84          80.92     82.46
Table 4. Classification Rates for Gender for YVCb (PICS database).

Approach    Gender Classification
            SVM       KNN
LBP         91.78     96.60
PLBP        95.18     98.79
CLBP        91.01     96.27
PCLBP       94.63     98.46
NRLBP       91.23     96.27
P-NRLBP     95.83     98.24
Table 5. Classification Rates for Gender for YVG (PICS database).

Approach    Gender Classification
            SVM       KNN
LBP         91.67     96.38
PLBP        95.18     98.24
CLBP        90.79     96.16
PCLBP       94.41     98.46
NRLBP       91.23     96.16
P-NRLBP     95.72     98.13
Table 6. Confusion Matrix of Gender Attribute for YVG Using PCLBP and SVM (FERET database).

          Male      Female
Male      97.17     2.82
Female    12        88
Table 7. Confusion Matrix of Gender Attribute for YUV Using P-NRLBP and KNN (PICS database).

          Male      Female
Male      99.13     0.87
Female    1.54      98.45
Table 8. Confusion Matrix of Race Attribute for L*a*b* Using PCLBP and SVM.

                  European    African    Middle Eastern    South Asian    East Asian    Hispanic
European          95.55       0.63       1.58              0.63           1.26          0.31
African           21.66       71.66      5                 1.66           0             0
Middle Eastern    15          3          76                2              1             3
South Asian       5.71        2.85       5.71              71.42          14.28         0
East Asian        16.19       0          1.90              2.85           78.09         0.95
Hispanic          20          0          5.71              5.71           11.42         57.14
