Article

Face Recognition with Triangular Fuzzy Set-Based Local Cross Patterns in Wavelet Domain

1
Department of Digital Forensics Engineering, Technology Faculty, Firat University, 23119 Elazig, Turkey
2
Département d’Informatique, Université du Québec à Montreal, 201, av. Président-Kennedy, Montréal, QC H2X 3Y7, Canada
3
Department of Computer Engineering, Shahrekord University, Shahrekord 64165478, Iran
4
Institute of Teleinformatics, Cracow University of Technology, Warszawska 24 st., F-5, 31-155 Krakow, Poland
*
Author to whom correspondence should be addressed.
Symmetry 2019, 11(6), 787; https://doi.org/10.3390/sym11060787
Submission received: 9 May 2019 / Revised: 4 June 2019 / Accepted: 6 June 2019 / Published: 13 June 2019

Abstract

In this study, a new face recognition architecture is proposed using a fuzzy-based Discrete Wavelet Transform (DWT) domain together with two novel local graph descriptors, called Local Cross Patterns (LCP). The proposed fuzzy wavelet-based face recognition architecture consists of DWT, triangular fuzzy set transformation, textural feature extraction with local descriptors, and classification phases. Firstly, the LL (Low-Low) sub-band is obtained by applying the two-dimensional Discrete Wavelet Transform (2D DWT) to the face images. After that, the triangular fuzzy transformation is applied to this band in order to obtain the A, B, and C images. The proposed LCP is then applied to the B image. The LCP consists of two descriptors: the Vertical Local Cross Pattern (VLCP) and the Horizontal Local Cross Pattern (HLCP). Linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), the quadratic kernel-based support vector machine (QKSVM), and K-nearest neighbors (KNN) were ultimately used to classify the extracted features. Ten descriptors widely used in the literature are also applied within the fuzzy wavelet architecture. The AT&T, CIE, Face94, and FERET databases are used for performance evaluation of the proposed methods. Experimental results show that the LCP descriptors have high face recognition performance, and the fuzzy wavelet-based model significantly improves the performance of textural-descriptor-based face recognition methods. Moreover, the proposed fuzzy-based domain and LCP method achieved classification accuracy rates of 97.3%, 100.0%, 100.0%, and 96.3% for the AT&T, CIE, Face94, and FERET datasets, respectively.

1. Introduction

Biometric identification systems are widely used in security-critical systems [1,2,3] and man-machine interfaces (MMI) [4]. They are used for personnel control and criminal monitoring/detection in areas such as the military, hospitals, airports, and education [3,5,6,7]. In systems used for personnel control, persons are registered, and recognition is performed by acquiring biometric data under certain standards. In criminal identification, however, people tend to hide themselves, which complicates the task [5,6,7]. In such cases, recognition can be achieved with camera images taken outdoors, where the targeted person is unaware. Thus, face recognition techniques are widely used in criminal identification applications [8]. In facial recognition systems, users must adhere to the rules as they enter the system with face data. However, in some public areas, such as metros and airports, people tend to conceal themselves from recognition systems. Thus, the performance of facial recognition applications can be decreased by various factors such as hairstyle, make-up, accessories, and esthetics [9,10,11]. For these reasons, face recognition applications based on salient features have become widespread in the literature [12,13].
There are many studies in the literature on facial recognition [14,15]. A descriptor called Local Gradient Hexa Pattern (LGHP) was proposed by Chakraborty et al. [16]. This descriptor provided effective facial features with binary micro-patterns. The performance of this study was evaluated using the widely used Cropped Extended Yale B, CMU-PIE, color-FERET, LFW, and Ghallager databases. A descriptor for use in face recognition applications was defined by Zhang et al. [17]. Their proposed descriptor was the local derivative pattern, which generated features based on local derivative variations and initially used the Local Binary Pattern. In this study, the FERET, CAS-PEAL, Carnegie Mellon University-Pose, Illumination, and Expression (PIE) database (CMU-PIE), Extended Yale B, and FRGC databases were used for experimental results. The results were evaluated on gray-level and Gabor feature images. A new pattern encoding the direction information of the face image was proposed by Pillai et al. [18] to generate a face feature code. In their study, the face images were separated, and the facial features were considered with respect to their diagonal neighbors. Then, all features were concatenated to obtain the final features. The proposed descriptor was evaluated using the FERET, Extended Yale B, ORL, and LFW-a databases. The study was compared with methods commonly used in the literature, such as PCA, local binary pattern (LBP), and local directional number (LDN). A method based on the Chain Code-Based Local Descriptor (CCBLD) for face recognition was presented by Karczmarek et al. [19]. In this study, the feature set was increased by dividing the pixels of the image into blocks. The experimental results were evaluated on the AT&T, FERET, CAS-PEAL, the face and gesture recognition network (FG-NET), Essex Collection of Facial Images, and Yale face databases.
The experimental results were presented in terms of recognition rate, computing time, noise/occlusion presence, and similarity/dissimilarity measures.
An extended LBP-based method was proposed by Liu et al. [20]. Pixel differences based on angular and radial differences were used in this study. The Yale B, FERET, and CAS-PEAL-R1 databases were used by Liu et al. [20] for experimental results, with recognition rate selected as the performance criterion. Gradient-based local descriptors were presented by Su and Yang [21]. Their histogram of gradient phases (HGP) was robust against local geometric and photometric errors. In the study, the success of the descriptors was assessed using the Yale B and FERET databases. A new descriptor named the local edge direction and texture descriptor (LEDTD) was proposed by Li et al. [22]. In the study, the robustness of the proposed descriptor was evaluated on four well-known databases (AR, Yale B, CMU PIE, FERET). The results show that the proposed descriptor was more successful than the other descriptors. A new local directional ternary pattern descriptor for facial expression recognition was presented by Ryu et al. [23]. In this study, a two-level grid approach was used to extract the face structure. The outcomes obtained by Ryu et al. [23] showed that the proposed method was sensitive to facial expressions. Recognition rate and accuracy parameters were used to measure performance on the selected databases (Extended Cohn-Kanade, JAFFE, MMI, CMU-PIE, GEMEP-FERA, and BU-3DFE).
In this study, two novel image descriptors and a novel domain are presented. The main aim of this study is to achieve a high classification ability for facial images. Two descriptors are defined in this study with a novel image transformation. The proposed descriptors are compared to 10 commonly used descriptors in the literature. Moreover, a Discrete Wavelet Transform (DWT)-fuzzy set-based domain (FWD) is proposed to increase the performance of 12 descriptors examined in the model. The performance of the proposed method is analyzed based on the accuracy rate. The abbreviations of the used methods are listed in Table 1.

Major Contribution

In this paper, a new Fuzzy wavelet-based face recognition architecture is proposed. This architecture is applied to widely used image descriptors. Moreover, two novel graph structures, Vertical Local Cross Pattern (VLCP) and Horizontal Local Cross Pattern (HLCP), are presented. This method generally consists of two main phases: feature extraction and classification. In the feature extraction phase, fuzzy wavelet domain (FWD) and image descriptors were used to extract salient features. Four classifiers were used in the classification phase. The technical contributions of the current study are given below.
  • Graph theory is a mathematical structure that has been used in many disciplines to solve different real-world problems. In the current study, using graphs, patterns were created for textural feature extraction, and these features were used for texture and face recognition. In this method, two novel graph-based image descriptors are presented to extract salient features for face recognition.
  • FWD is created using DWT and fuzzy set theory. The main aim of the FWD method is to gain a high face recognition capability.
  • 12 image descriptors and 2 domains (pixel and fuzzy wavelet) were utilized in the feature extraction phase, and 4 classifiers (Linear Discriminant Analysis (LDA), QDA, SVM, and KNN) were utilized in the classification phase. Therefore, 96 novel face recognition settings (12 × 4 × 2) are proposed using the proposed architecture, and these methods are compared in this paper.
  • A large benchmark set is given in the experimental results to understand the effects of the image descriptors on facial image recognition with variable domains and classifiers.
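The 12 × 4 × 2 combination of descriptors, classifiers, and domains can be enumerated as a quick sanity check; the name lists below are placeholders, not the paper's exact identifiers:

```python
from itertools import product

# Placeholder name lists; only the counts (12 descriptors, 4 classifiers,
# 2 domains) follow the paper, the labels here are illustrative.
descriptors = [f"D{i}" for i in range(1, 13)]   # 12 image descriptors
classifiers = ["LDA", "QDA", "SVM", "KNN"]      # 4 classifiers
domains = ["pixel", "fuzzy-wavelet"]            # 2 domains

# Every (descriptor, classifier, domain) triple is one recognition setting.
settings = list(product(descriptors, classifiers, domains))
print(len(settings))  # 96 settings in total
```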
The rest of this study is structured as follows. Our proposed method for face recognition is elaborated in Section 2. The obtained results are presented in Section 3. In Section 4, discussions are presented. Finally, we conclude the study in Section 5.

2. The Proposed Method

In this study, a novel face recognition architecture based on wavelets [24,25,26], the fuzzy logic method [12,27], and the Local Cross Pattern (LCP) is proposed. There are two major phases: the feature extraction and classification phases. In the feature extraction phase, the LL, LH, HL, and HH bands were obtained by applying the 2D-DWT to the image. The low-low (LL) sub-band is an approximation band that is robust against compression; for this reason, many methods have used DWT to achieve robustness against JPEG compression attacks. Then, the A, B, and C clusters were obtained by using the proposed triangular-based fuzzy method. It should be noted that the triangle-based fuzzy method is a modified version of Neutrosophy theory [28]. Then, the proposed LCPs, LBP, and the other 9 graph structures were applied to the B cluster. The histogram of the descriptor-applied B cluster is utilized as a feature set with a dimension of 256. In the classification phase, experimental results are presented using SVM [29], KNN [30,31], LDA [32,33], and QDA [34,35,36]. The block diagram of the proposed method is given in Figure 1.
In the following subsections, the components of the proposed architecture are described in more detail.

2.1. The Proposed Fuzzy Set

In this study, a fuzzy set like Neutrosophy is proposed. In the proposed method, three clusters are defined. Hence, membership degrees of the A, B, and C clusters are calculated using the median filter and triangular clusters [12,27]. The proposed fuzzy set is given in Figure 2.
The main aim of the FWD is to provide a novel, simple, and effective domain using both fuzzy and wavelet methods. According to the literature, Neutrosophy and fuzzy-based Neutrosophy-like transformations are very effective methods for image and signal processing. In this article, a simple and effective fuzzy-based, Neutrosophy-like transformation is therefore proposed. Firstly, the A, B, and C membership degrees of the images are calculated. To calculate these membership degrees, triangular fuzzy sets are used; these sets are shown in Figure 2. The mathematical notations of the membership calculations of the proposed triangular fuzzy-based transformation are given in Equations (1)–(4).
$$A = \frac{A - A_{\min}}{A_{\max} - A_{\min}} \tag{1}$$
$$A = \mathrm{medfilt2}(I, [w, w]) \tag{2}$$
$$C = 1 - A \tag{3}$$
$$B_{i,j} = \begin{cases} A_{i,j} \times 2, & A_{i,j} \le 0.5 \\ C_{i,j} \times 2, & A_{i,j} > 0.5 \end{cases} \tag{4}$$
where $w$ represents the size of the overlapping blocks (the median filter window) and $\mathrm{medfilt2}(\cdot)$ represents the 2D median filter function.
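As a minimal sketch, the triangular fuzzy transform of Equations (1)–(4) can be written in plain Python. The function names, the 3 × 3 window, and the edge-replication behavior of the median filter are illustrative assumptions (MATLAB's `medfilt2`, for instance, zero-pads by default):

```python
def medfilt2(img, w=3):
    """2D median filter over w x w windows; edges handled by replication
    (an assumption -- MATLAB's medfilt2 zero-pads by default)."""
    h, wd = len(img), len(img[0])
    out = [[0.0] * wd for _ in range(h)]
    r = w // 2
    for i in range(h):
        for j in range(wd):
            window = [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), wd - 1)]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out

def fuzzy_transform(img, w=3):
    """Return the A, B, C membership images per Equations (1)-(4)."""
    A = medfilt2(img, w)                                          # Eq. (2)
    amin = min(min(row) for row in A)
    amax = max(max(row) for row in A)
    A = [[(v - amin) / (amax - amin) for v in row] for row in A]  # Eq. (1)
    C = [[1.0 - v for v in row] for row in A]                     # Eq. (3)
    B = [[a * 2 if a <= 0.5 else c * 2                            # Eq. (4)
          for a, c in zip(ra, rc)] for ra, rc in zip(A, C)]
    return A, B, C
```

Note that every value of B stays in [0, 1]: when A ≤ 0.5 the doubled A is at most 1, and when A > 0.5 the doubled C = 1 − A is below 1.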

2.2. Local Cross Pattern (LCP)

In this article, a novel image descriptor called LCP is presented for face recognition. The main motivation of the LCP is to use a novel graph-based pattern for distinctive feature extraction. The most widely used image descriptor is the LBP, which extracts features using neighboring pixels. In contrast, the proposed LCP uses cross pixels and extracts features both vertically and horizontally. The proposed LCP is also extendable, because users can choose any block size; in this study, only the minimum-sized block is defined. The LCP descriptor is analyzed vertically and horizontally, and the representations of these two forms are shown in Figure 3a,b.
To apply the LCP descriptor, in the first step, the image is divided into 4 × 2 and 2 × 4 areas in the vertical and horizontal versions, respectively. The feature sets of the VLCP and HLCP descriptors are obtained by using the following equations:
$$b_1 = S(p_{i,j} - p_{i+1,j+1}) \tag{5}$$
$$b_2 = S(p_{i+1,j+1} - p_{i+2,j}) \tag{6}$$
$$b_3 = S(p_{i+2,j} - p_{i+3,j+1}) \tag{7}$$
$$b_4 = S(p_{i+3,j+1} - p_{i+3,j}) \tag{8}$$
$$b_5 = S(p_{i+3,j} - p_{i+2,j+1}) \tag{9}$$
$$b_6 = S(p_{i+2,j+1} - p_{i+1,j}) \tag{10}$$
$$b_7 = S(p_{i+1,j} - p_{i,j+1}) \tag{11}$$
$$b_8 = S(p_{i,j+1} - p_{i,j}) \tag{12}$$
$$S_{VLCP}(x - y) = \begin{cases} 0, & x - y < 0 \\ 1, & x - y \ge 0 \end{cases} \tag{13}$$
$$p_{i,j}^{VLCP} = \sum_{t=1}^{8} b_t \times 2^{8-t} \tag{14}$$
$$b_1 = S(p_{i,j} - p_{i+1,j+1}) \tag{15}$$
$$b_2 = S(p_{i+1,j+1} - p_{i,j+2}) \tag{16}$$
$$b_3 = S(p_{i,j+2} - p_{i+1,j+3}) \tag{17}$$
$$b_4 = S(p_{i+1,j+3} - p_{i,j+3}) \tag{18}$$
$$b_5 = S(p_{i,j+3} - p_{i+1,j+2}) \tag{19}$$
$$b_6 = S(p_{i+1,j+2} - p_{i,j+1}) \tag{20}$$
$$b_7 = S(p_{i,j+1} - p_{i+1,j}) \tag{21}$$
$$b_8 = S(p_{i+1,j} - p_{i,j}) \tag{22}$$
$$S_{HLCP}(x - y) = \begin{cases} 0, & x - y < 0 \\ 1, & x - y \ge 0 \end{cases} \tag{23}$$
$$p_{i,j}^{HLCP} = \sum_{t=1}^{8} b_t \times 2^{8-t} \tag{24}$$
where $S(\cdot)$ is the thresholding (step) function defined in Equations (13) and (23), the $b_t$ are the extracted bits, and $p_{i,j}$, $p_{i,j}^{HLCP}$, and $p_{i,j}^{VLCP}$ are the $(i, j)$th pixels of the original, HLCP, and VLCP images, respectively.
The histogram and images obtained from the VLCP and HLCP descriptors are given in Figure 4.
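As a concrete sketch, the VLCP code of a single 4 × 2 block (Equations (5)–(14)) can be computed as follows; the traversal encoded in `pairs` mirrors the bit equations above, and the function names are illustrative:

```python
def S(x):
    """Thresholding function of Eq. (13): 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def vlcp_code(p, i=0, j=0):
    """VLCP code for the 4 x 2 block whose top-left pixel is p[i][j]."""
    pairs = [  # (from, to) pixel pairs of Equations (5)-(12), in order
        ((i, j),     (i+1, j+1)),
        ((i+1, j+1), (i+2, j)),
        ((i+2, j),   (i+3, j+1)),
        ((i+3, j+1), (i+3, j)),
        ((i+3, j),   (i+2, j+1)),
        ((i+2, j+1), (i+1, j)),
        ((i+1, j),   (i, j+1)),
        ((i, j+1),   (i, j)),
    ]
    bits = [S(p[a][b] - p[c][d]) for (a, b), (c, d) in pairs]
    # Eq. (14): b_1 is the most significant bit, b_8 the least.
    return sum(bit << (7 - t) for t, bit in enumerate(bits))
```

A uniform block yields all-one bits (every difference is zero, and S(0) = 1), so its code is 255; the HLCP version is analogous with the 2 × 4 index pattern of Equations (15)–(22).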

2.3. Steps of the Proposed Face Recognition Architecture

This study provides a novel face recognition method based on fuzzy wavelets and two new local descriptors. The proposed local descriptors are used in the fuzzy wavelet method. The steps of the proposed method are given as follows.
Step 1: Load the face image.
Step 2: Convert the RGB face image to gray.
$$gray = \begin{bmatrix} I_R & I_G & I_B \end{bmatrix} \times \begin{bmatrix} 0.2989 \\ 0.5870 \\ 0.1141 \end{bmatrix}$$
where $gray$ is the gray-level image, and $I_R$, $I_G$, and $I_B$ are the red, green, and blue channels of the face image, respectively. It should be noted that if the face image is already gray-level, this step is skipped.
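Step 2 can be sketched as a weighted sum over the RGB channels using the coefficients from the equation above; the function names are illustrative:

```python
def to_gray(r, g, b):
    """Weighted RGB-to-gray sum with the coefficients from the text."""
    return 0.2989 * r + 0.5870 * g + 0.1141 * b

def rgb_to_gray(img):
    """Convert an image given as rows of (R, G, B) tuples to gray levels."""
    return [[to_gray(*px) for px in row] for row in img]
```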
Step 3: Apply 2D DWT to the gray image and obtain the LL sub-bands.
$$[LL, LH, HL, HH] = DWT2(gray)$$
where $DWT2(\cdot)$ is the 2D DWT function with the Haar filter, and $LL$, $LH$, $HL$, and $HH$ are the Low-Low, Low-High, High-Low, and High-High sub-bands, respectively.
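Step 3 can be sketched with an explicit one-level 2D Haar DWT over 2 × 2 blocks. The labeling of the two detail sub-bands (which one is called LH vs. HL) varies across libraries, so the assignment below is one common convention and the function name is illustrative:

```python
def dwt2_haar(img):
    """One level of the 2D Haar DWT: per 2 x 2 block, the normalized sum
    gives the approximation and the three signed sums give the details."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2  # crop to even size
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, h, 2):
        rll, rlh, rhl, rhh = [], [], [], []
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            rll.append((a + b + c + d) / 2.0)   # approximation (LL)
            rlh.append((a - b + c - d) / 2.0)   # one detail band
            rhl.append((a + b - c - d) / 2.0)   # the other detail band
            rhh.append((a - b - c + d) / 2.0)   # diagonal detail (HH)
        LL.append(rll); LH.append(rlh); HL.append(rhl); HH.append(rhh)
    return LL, LH, HL, HH
```

On a constant region, all detail bands vanish and the LL value is twice the pixel value, which is the expected behavior of the orthonormal Haar filter.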
Step 4: Apply triangle fuzzy transform to the LL sub-band by using Equations (1)–(4) and obtain the B image.
Step 5: Use VLCP, HLCP, or other local descriptors.
$$B^{VLCP} = VLCP(B)$$
$$B^{HLCP} = HLCP(B)$$
where $VLCP(\cdot)$ is the VLCP function defined in Equations (5)–(14), $HLCP(\cdot)$ is the HLCP function defined in Equations (15)–(24), and $B^{VLCP}$ and $B^{HLCP}$ are the VLCP- and HLCP-applied images.
Step 6: Calculate histogram of these images. The histogram of these images is utilized as a feature.
Step 7: Classify face images using this feature. In this paper, four well-known algorithms, including LDA, QDA, SVM, and KNN classifiers, are used.
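Step 6 can be sketched as a plain 256-bin histogram over the descriptor-applied image (each pixel holds an 8-bit code); the function name is illustrative:

```python
def histogram256(img):
    """256-bin histogram of an 8-bit descriptor image, used as the
    feature vector of Step 6."""
    hist = [0] * 256
    for row in img:
        for v in row:
            hist[v] += 1
    return hist
```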

2.4. The Used Definitions for the Proposed Method

In recent years, machine learning methods have been very popular and effective in solving problems in various fields [37,38,39,40,41,42,43,44]. Thus, to classify the extracted features, the MATLAB 2016a classification learner toolbox was used. This study applies LDA, QDA, SVM, and KNN classifiers. The parameters of these classifiers are given in Table 2. We used 10-fold cross validation to obtain the experimental results.

2.5. Databases

In this study, AT&T, CIE, Face 94, and FERET databases were used. These databases are widely used in many studies. The properties of the databases are given in Table 3.
The performance is analyzed for the LDA, QDA, SVM, and KNN classifiers using 10 widely used descriptors (LBP [50], local graph structure (LGS) [51], symmetric local graph structure (SLGS) [52], vertical local graph structure (VLGS) [53], vertical symmetric local graph structure (VSLGS) [53], zigzag horizontal local graph structure (ZHLGS) [53], zigzag horizontal middle local graph structure (ZHMLGS) [53], zigzag vertical local graph structure (ZVLGS) [53], zigzag vertical middle local graph structure (ZVMLGS) [53], and logically extended local graph structure (LELGS) [53]) and the proposed VLCP and HLCP image descriptors. Here, the classical descriptors are enumerated from 1 to 10 and the proposed descriptors as 11 and 12. Then, the fuzzy wavelet FWD-LBP, FWD-LGS, FWD-SLGS, FWD-VLGS, FWD-VSLGS, FWD-ZHLGS, FWD-ZHMLGS, FWD-ZVLGS, FWD-ZVMLGS, FWD-LELGS, FWD-VLCP, and FWD-HLCP descriptors are presented and enumerated from 13 to 24 in this paper.

3. Performance Analysis

For the analysis of the results, an Intel Core i7 PC with 16 GB RAM, running the Windows 10 operating system, was utilized. The performance of all methods was investigated using the accuracy measure (see Equation (29)):
$$Acc(\%) = \frac{\text{Number of correctly predicted images}}{\text{Total number of images}} \times 100 \tag{29}$$
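The accuracy measure of Equation (29) can be expressed directly in code (a small sketch; names are illustrative):

```python
def accuracy(y_true, y_pred):
    """Accuracy in percent: correctly predicted images over total images."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return 100.0 * correct / len(y_true)
```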

3.1. Performance Analysis on AT&T Database

The performance of the descriptors was evaluated using 10 samples in 30 classes for the AT&T (ORL) database. The AT&T database consists of gray images. Accuracy rates are given for the horizontal and vertical forms of the 10 selected descriptors and the proposed LCP descriptors in Table 4.
In Table 4, the average accuracy rates for the LDA, QDA, SVM, and KNN of the descriptors in the AT&T database are calculated as 74.4%, 81.0%, 87.4%, 85.0%, respectively. For four classifiers, the average accuracy rates obtained by the FWD-based method for AT&T database are 83.5%, 87.5%, 90.7%, and 89.1%, respectively. While analyzing the obtained outcomes, we observed that the FWD-based method has a positive effect on the average accuracy rates.

3.2. Performance Analysis on CIE Database

The CIE database is one of the most widely used databases in literature. In the study, 10 samples in 30 classes are used for the CIE database. Table 5 shows the accuracy rates of the selected descriptors.
The performance of the descriptors with the proposed method (fuzzy wavelet domain) is also presented in Table 5. For the CIE database, the average values of the LDA, QDA, SVM, and KNN classifiers are 97.6%, 99.2%, 99.3%, and 99.5%, respectively, while the average accuracies of the same classifiers (LDA, QDA, SVM, and KNN) with the FWD-based method are 99.8%, 99.9%, 98.6%, and 98.9%, respectively.

3.3. Performance Analysis on Face94 Database

The Face 94 database consists of color images. Ten samples were selected for 30 classes in this database. The accuracy rates of the descriptors are given in Table 6.
The accuracy on the Face 94 database obtained by the proposed FWD-based method is shown in Table 6. The average accuracy rates for LDA, QDA, SVM, and KNN are 98.9%, 97.7%, 98.7%, and 98.1% in the pixel domain, while 100%, 99.7%, 99.5%, and 98.8% are achieved in the fuzzy wavelet domain, respectively.

3.4. Performance Analysis of the FERET Database

The FERET database is also widely used in the literature. Six samples were selected for 50 classes in the FERET database. The accuracy values of descriptors for LDA, QDA, SVM, and KNN classifiers are presented in Table 7. The average accuracy rates are 73.1%, 74.1%, 91.3%, and 93.1% for LDA, QDA, SVM, and KNN classifiers, respectively.
In Table 7, the performance of the descriptors is also calculated with the FWD-based method. According to the obtained results, the average accuracy rates of the LDA, QDA, SVM, and KNN classifiers with the proposed method are 75.3%, 72.1%, 88.4%, and 91.5%, respectively.

4. Discussion

  • In the experiments, 12 (descriptor) × 4 (classifier) × 2 (domain) = 96 methods have been compared and a large benchmark set has been obtained. The comprehensive results are given in this section.
  • According to the obtained results, the SVM classifier performed better than other classifiers applied in this study.
  • In Figure 5, the average success of the classifiers on each database is presented for each descriptor and domain using boxplot analysis, which shows the minimum, maximum, and average values. The bold blue lines and blue boxes represent the results of the fuzzy wavelet and pixel domains, respectively.
The proposed architecture can be implemented in software and thus used as a practical model. The results show that the proposed method can be successfully used in face recognition applications and, therefore, to solve real-world face recognition problems. Table 7 clearly illustrates that the proposed descriptors achieved satisfactory results. For more clarity on the effectiveness of the proposed methods, the obtained confusion matrices are given in Figure 6.
As can be seen from the confusion matrices of the proposed method, classification accuracies of up to 100.0% are achieved on the four datasets used in this study. According to the confusion matrix of the AT&T dataset, higher errors occurred in the first and ninth classes. As shown in the confusion matrix of the FERET dataset, the worst per-class classification accuracy rate was 66.67%.
Briefly, two novel descriptors are proposed to extract features from facial images. In addition, we proposed FWD as a novel domain; FWD is an alternative transformation to Neutrosophy. The FWD approach is first applied to four widely used face image datasets. To show the impact of the FWD approach, our proposed descriptors and several widely used descriptors are employed. The positive effects of the FWD for face recognition are demonstrated in Table 4, Table 5, Table 6 and Table 7. These tables also prove the success of the proposed VLCP and HLCP. We used classification accuracy to test the proposed methods; this is an effective performance measure because the used datasets are homogeneous. The best classifier is SVM, because the best results are generally obtained using SVM; after SVM, KNN attained the highest performance rates. These results clearly illustrate that the proposed descriptors and FWD can attain high success rates on larger facial image datasets. The advantages and novelties of this paper are given below.
  • Two effective novel graph-based patterns called VLCP and HLCP are presented.
  • A novel transformation (FWD) is presented, and higher classification accuracies are obtained by using this domain.
  • 96 facial image recognition methods are presented, and significant benchmark results are given.
  • Lightweight methods are presented, because the computational complexity of the proposed methods is $O(n)$, where $n$ is the size of the image.
  • Cognitive methods are proposed.
  • The presented methods can also be applied to texture images. As can be seen from the literature, many texture recognition problems have been solved using descriptors.
It should be noted that the demerit of this paper is its use of small face image datasets.

5. Conclusions

In this study, a new descriptor is defined for face recognition applications. The proposed descriptor is analyzed in vertical and horizontal forms. This descriptor is called LCP. By using LBP, graph structures, and the proposed LCPs, 96 methods were applied. Moreover, a novel image domain, which was created by using DWT and triangle fuzzy sets, is proposed. The proposed domain is called Fuzzy Wavelet (FW). In the classification phase, LDA, QDA, SVM, and KNN were utilized as classifiers. By using 12 descriptors, 4 classifiers, and 2 domains, 96 facial image recognition methods were obtained. To evaluate this method and obtain benchmarks, widely used face image datasets (AT&T, CIE, Face94 and FERET) were used. The classification accuracy results of these methods are given in the experiments. Furthermore, the presented descriptors and fuzzy wavelet domain were successful in face recognition.
In our future works, novel lightweight face recognition applications and devices can be developed using the proposed method. Moreover, novel deep neural networks can be designed using the FWD and LCP descriptors. Our future intention is to propose one- and two-dimensional VHLCP and ternary VHLCP descriptors to classify textures and signals.

Author Contributions

The work presented in this paper was a collaboration of all authors. The manuscript was revised by all authors. Conceptualization: T.T., S.D.; Methodology: T.T., S.D.; Software: T.T., S.D.; Validation: T.T., S.D.; Formal analysis T.T., S.D.; Investigation T.T., S.D.; Resources T.T., S.D.; Data curation T.T., S.D.; Writing—original draft preparation T.T., S.D.; Writing—review and editing T.T., S.D., M.A., M.E.B., P.P.; Visualization T.T., S.D.; Supervision: T.T., S.D., P.P.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Daugman, J.G. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 1148–1161. [Google Scholar] [CrossRef]
  2. Galbally, J.; Marcel, S.; Fierrez, J. Biometric antispoofing methods: A survey in face recognition. IEEE Access 2014, 2, 1530–1552. [Google Scholar] [CrossRef]
  3. Rzecki, K.; Pławiak, P.; Niedźwiecki, M.; Sośnicki, T.; Leśkow, J.; Ciesielski, M. Person recognition based on touch screen gestures using computational intelligence methods. Inf. Sci. 2017, 415, 70–84. [Google Scholar] [CrossRef]
  4. Pławiak, P.; Sośnicki, T.; Niedźwiecki, M.; Tabor, Z.; Rzecki, K. Hand body language gesture recognition based on signals from specialized glove and machine learning algorithms. IEEE Trans. Ind. Inform. 2016, 12, 1104–1113. [Google Scholar] [CrossRef]
  5. Kim, Y.; Yoo, J.H.; Choi, K. A motion and similarity-based fake detection method for biometric face recognition systems. IEEE Trans. Consum. Electron. 2011, 57, 756–762. [Google Scholar] [CrossRef]
  6. Hjelmås, E.; Low, B.K. Face detection: A survey. Comput. Vis. Image Underst. 2001, 83, 236–274. [Google Scholar] [CrossRef]
  7. Kwak, K.C.; Pedrycz, W. Face recognition using a fuzzy fisherface classifier. Pattern Recognit. 2005, 38, 1717–1732. [Google Scholar] [CrossRef]
  8. Muqeet, M.A.; Holambe, R.S. Local binary patterns based on directional wavelet transform for expression and pose-invariant face recognition. Appl. Comput. Inform. 2017. [Google Scholar] [CrossRef]
  9. Yang, B.; Chen, S. A comparative study on local binary pattern (LBP) based face recognition: LBP histogram versus LBP image. Neurocomputing 2013, 120, 365–379. [Google Scholar] [CrossRef]
  10. Tao, G.; Zhao, X.; Chen, T.; Liu, Z.; Li, S. Image feature representation with orthogonal symmetric local weber graph structure. Neurocomputing 2017, 240, 70–83. [Google Scholar] [CrossRef]
  11. Moghaddam, B.; Jebara, T.; Pentland, A. Bayesian face recognition. Pattern Recognit. 2000, 33, 1771–1782. [Google Scholar] [CrossRef]
  12. Melin, P.; Mendoza, O.; Castillo, O. Face recognition with an improved interval type-2 fuzzy logic sugeno integral and modular neural networks. IEEE Trans. Syst. Manand Cybern. Part A Syst. Hum. 2011, 41, 1001–1012. [Google Scholar] [CrossRef]
  13. Liu, Y.H.; Chen, Y.T. Face recognition using total margin-based adaptive fuzzy support vector machines. IEEE Trans. Neural Netw. 2007, 18, 178–192. [Google Scholar] [CrossRef] [PubMed]
  14. Coşkun, M.; Uçar, A.; Yildirim, Ö.; Demir, Y. Face recognition based on convolutional neural network. In Proceedings of the 2017 International Conference on Modern Electrical and Energy Systems (MEES), Kremenchuk, Ukraine, 15–17 November 2017; pp. 376–379. [Google Scholar]
  15. Baloglu, U.B.; Yıldırım, Ö.; Uçar, A. Person Recognition via Facial Expression Using ELM Classifier Based CNN Feature Maps. In Proceedings of ELM-2017; Springer: Cham, Switzerland, 2017. [Google Scholar]
  16. Chakraborty, S.; Singh, S.; Chakraborty, P. Local gradient hexa pattern: A descriptor for face recognition and retrieval. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 171–180. [Google Scholar] [CrossRef]
  17. Zhang, B.; Gao, Y.; Zhao, S.; Liu, J. Local derivative pattern versus local binary pattern: Face recognition with high-order local pattern descriptor. IEEE Trans. Image Process. 2010, 19, 533–544. [Google Scholar] [CrossRef] [PubMed]
  18. Pillai, A.; Soundrapandiyan, R.; Satapathy, S.; Satapathy, S.C.; Jung, K.H.; Krishnan, R. Local diagonal extrema number pattern: A new feature descriptor for face recognition. Future Gener. Comput. Syst. 2018, 81, 297–306. [Google Scholar] [CrossRef]
  19. Karczmarek, P.; Kiersztyn, A.; Pedrycz, W.; Dolecki, M. An application of chain code-based local descriptor and its extension to face recognition. Pattern Recognit. 2017, 65, 26–34. [Google Scholar] [CrossRef]
  20. Liu, L.; Fieguth, P.; Zhao, G.; Pietikäinen, M.; Hu, D. Extended local binary patterns for face recognition. Inf. Sci. 2016, 358, 56–72. [Google Scholar] [CrossRef]
  21. Su, C.Y.; Yang, J.F. Histogram of gradient phases: A new local descriptor for face recognition. IET Comput. Vis. 2014, 8, 556–567. [Google Scholar] [CrossRef]
  22. Li, J.; Sang, N.; Gao, C. LEDTD: Local edge direction and texture descriptor for face recognition. Signal Process. Image Commun. 2016, 41, 40–45. [Google Scholar] [CrossRef]
  23. Ryu, B.; Rivera, A.R.; Kim, J.; Chae, O. Local directional ternary pattern for facial expression recognition. IEEE Trans. Image Process. 2017, 26, 6006–6018. [Google Scholar] [CrossRef] [PubMed]
  24. Chien, J.T.; Wu, C.C. Discriminant waveletfaces and nearest feature classifiers for face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1644–1649. [Google Scholar] [CrossRef]
  25. Zhang, B.L.; Zhang, H.; Ge, S.S. Face recognition by applying wavelet subband representation and kernel associative memory. IEEE Trans. Neural Netw. 2004, 15, 166–177. [Google Scholar] [CrossRef] [PubMed]
  26. Ekenel, H.K.; Sankur, B. Multiresolution face recognition. Image Vis. Comput. 2005, 23, 469–477. [Google Scholar] [CrossRef]
  27. Chen, Q.; Wu, H.; Yachida, M. Face detection by fuzzy pattern matching. In Proceedings of the IEEE International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 1995; pp. 591–596. [Google Scholar]
  28. Faraji, M.R.; Qi, X. Face recognition under varying illumination based on adaptive homomorphic eight local directional patterns. IET Comput. Vis. 2014, 9, 390–399. [Google Scholar] [CrossRef]
  29. Guo, G.; Li, S.Z.; Chan, K. Face recognition by support vector machines. In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France, 28–30 March 2000; pp. 196–201. [Google Scholar]
  30. Abuzneid, M.A.; Mahmood, A. Enhanced Human Face Recognition Using LBPH Descriptor, Multi-KNN, and Back-Propagation Neural Network. IEEE Access 2018, 6, 20641–20651. [Google Scholar] [CrossRef]
  31. Beyer, K.; Goldstein, J.; Ramakrishnan, R.; Shaft, U. When is “nearest neighbor” meaningful? In International Conference on Database Theory; Springer: Berlin/Heidelberg, Germany, 1999; pp. 217–235. [Google Scholar]
  32. Chen, L.F.; Liao, H.Y.M.; Ko, M.T.; Lin, J.C.; Yu, G.J. A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognit. 2000, 33, 1713–1726. [Google Scholar] [CrossRef]
  33. Yu, H.; Yang, J. A direct LDA algorithm for high-dimensional data—With application to face recognition. Pattern Recognit. 2001, 34, 2067–2070. [Google Scholar] [CrossRef]
  34. Lu, J.; Plataniotis, K.N.; Venetsanopoulos, A.N. Regularized discriminant analysis for the small sample size problem in face recognition. Pattern Recognit. Lett. 2003, 24, 3079–3087. [Google Scholar] [CrossRef]
  35. Baek, J.; Kim, M. Face recognition using partial least squares components. Pattern Recognit. 2004, 37, 1303–1306. [Google Scholar] [CrossRef]
  36. Lu, J.; Plataniotis, K.N.; Venetsanopoulos, A.N. Regularization studies of linear discriminant analysis in small sample size scenarios with application to face recognition. Pattern Recognit. Lett. 2005, 26, 181–191. [Google Scholar] [CrossRef]
  37. Książek, W.; Abdar, M.; Acharya, U.R.; Pławiak, P. A novel machine learning approach for early detection of hepatocellular carcinoma patients. Cogn. Syst. Res. 2019, 54, 116–127. [Google Scholar] [CrossRef]
  38. Pławiak, P. An estimation of the state of consumption of a positive displacement pump based on dynamic pressure or vibrations using neural networks. Neurocomputing 2014, 144, 471–483. [Google Scholar] [CrossRef]
  39. Pławiak, P.; Acharya, U.R. Novel deep genetic ensemble of classifiers for arrhythmia detection using ECG signals. Neural Comput. Appl. 2019. [Google Scholar] [CrossRef]
  40. Abdar, M.; Zomorodi-Moghadam, M.; Zhou, X.; Gururajan, R.; Tao, X.; Barua, P.D.; Gururajan, R. A new nested ensemble technique for automated diagnosis of breast cancer. Pattern Recognit. Lett. 2019. [Google Scholar] [CrossRef]
  41. Ogiela, M.R.; Tadeusiewicz, R. Nonlinear processing and semantic content analysis in medical imaging—A cognitive approach. IEEE Trans. Instrum. Measurement 2015, 54, 2149–2155. [Google Scholar] [CrossRef]
  42. Tadeusiewicz, R.; Ogiela, L.; Ogiela, M.R. Cognitive analysis techniques in business planning and decision support systems. In International Conference on Artificial Intelligence and Soft Computing; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1027–1039. [Google Scholar]
  43. Ogiela, L.; Tadeusiewicz, R.; Ogiela, M.R. Cognitive analysis in diagnostic DSS-type IT systems. In International Conference on Artificial Intelligence and Soft Computing; Springer: Berlin/Heidelberg, Germany, 2006; pp. 962–971. [Google Scholar]
  44. Panek, D.; Skalski, A.; Gajda, J.; Tadeusiewicz, R. Acoustic analysis assessment in speech pathology detection. Int. J. Appl. Math. Comput. Sci. 2015, 25, 631–643. [Google Scholar] [CrossRef] [Green Version]
  45. Samaria, F.; Harter, A. Parameterisation of a stochastic model for human face identification. In Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994; Available online: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html (accessed on 8 May 2019).
  46. Kabacinski, R.; Kowalski, M. Vein pattern database and benchmark results. Electron. Lett. 2011, 47, 1127–1128. [Google Scholar] [CrossRef]
  47. Libor Spacek’s Facial Image Database, “Face94Database”. Available online: http://cswww.essex.ac.uk/mv/allfaces/faces94.html (accessed on 8 May 2019).
  48. Phillips, P.J.; Wechsler, H.; Huang, J.; Rauss, P. The FERET database and evaluation procedure for face recognition algorithms. Image Vis. Comput. J. 1998, 16, 295–306. [Google Scholar] [CrossRef]
  49. Phillips, P.J.; Moon, H.; Rizvi, S.A.; Rauss, P.J. The FERET evaluation methodology for face recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1090–1104. [Google Scholar] [CrossRef]
  50. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  51. Abusham, E.E.A.; Bashir, H.K. Face Recognition Using Local Graph Structure (LGS). In Human-Computer Interaction. Interaction Techniques and Environments. HCI 2011; Lecture Notes in Computer Science; Jacko, J.A., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6762. [Google Scholar]
  52. Abdullah, M.F.A.; Sayeed, M.S.; Muthu, K.S.; Bashier, H.K.; Azman, A.; Ibrahim, S.Z. Face recognition with symmetric local graph structure (SLGS). Expert Syst. Appl. 2014, 41, 6131–6137. [Google Scholar] [CrossRef]
  53. Rakshit, R.D.; Nath, S.C.; Kisku, D.R. Face identification using some novel local descriptors under the influence of facial complexities. Expert Syst. Appl. 2018, 92, 82–94. [Google Scholar] [CrossRef]
Figure 1. The block diagram of the proposed architecture. (a) FWD; (b) pixel domain.
Figure 2. The proposed fuzzy set.
Figure 3. The proposed descriptors: (a) Vertical Local Cross Pattern (VLCP) and (b) Horizontal Local Cross Pattern (HLCP).
Figure 4. LCP descriptor: (a) original image; (b) VLCP image; (c) HLCP image; (d) histogram of VLCP image; (e) histogram of HLCP image.
Figure 5. Boxplot analysis of LBP, LGS, SLGS, VLGS, VSLGS, ZHLGS, ZHMLGS, ZVLGS, ZVMLGS, LELGS, VLCP, and HLCP in the pixel and fuzzy wavelet domains: (a) AT&T; (b) CIE; (c) Face94; (d) FERET.
Figure 6. Confusion matrix for (a) AT&T, (b) CIE, (c) Face94, (d) FERET.
Table 1. Abbreviations used in this study.

Abbreviation  Name
LCP           Local Cross Pattern
LBP           Local Binary Pattern
VLCP          Vertical Local Cross Pattern
LGS           Local Graph Structure
HLCP          Horizontal Local Cross Pattern
SLGS          Symmetric Local Graph Structure
DWT           Discrete Wavelet Transform
VSLGS         Vertical Symmetric Local Graph Structure
FWD           Fuzzy Wavelet Domain
ZHLGS         Zigzag Horizontal Local Graph Structure
LL            Low-Low sub-band
ZHMLGS        Zigzag Horizontal Middle Local Graph Structure
LDA           Linear Discriminant Analysis
ZVLGS         Zigzag Vertical Local Graph Structure
FERET         Facial Recognition Technology
CIE           Chinese Institute of Electronics
Face94        Collection of Facial Images
AT&T (ORL)    Cambridge Olivetti Research Lab
ZVMLGS        Zigzag Vertical Middle Local Graph Structure
LELGS         Logically Extended Local Graph Structure
YALE B        Yale Face Database B
FG-NET        Face and Gesture Recognition Network
VLGS          Vertical Local Graph Structure
CMU-PIE       Carnegie Mellon University Pose, Illumination, and Expression
Table 2. Attribute settings of the applied machine learning methods.

Method  Attribute             Value
LDA     Regularization        Diagonal covariance
QDA     Regularization        Diagonal covariance
SVM     Kernel function       Quadratic
        Box constraint level  1
        Kernel scale mode     Auto
        Manual kernel scale   1
        Multiclass method     One-vs-All
        Standardize data      True
        PCA                   Disabled
KNN     Number of neighbors   1
        Distance metric       City block
        Distance weight       Equal
        Standardize data      True
        PCA                   Disabled
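The settings in Table 2 can be approximated in code. The original study's implementation environment is not stated in this excerpt, so the sketch below maps the settings onto scikit-learn as an illustrative assumption only (e.g., a degree-2 polynomial kernel for "Quadratic", `metric="manhattan"` for "City block"; scikit-learn's LDA/QDA do not expose a direct "diagonal covariance" switch). The data are synthetic stand-ins for descriptor histograms.

```python
# Hedged sketch: an approximate scikit-learn rendering of the Table 2
# classifier settings. Synthetic data; not the study's actual pipeline.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for texture-descriptor features: 4 classes, 30 samples each.
rng = np.random.default_rng(0)
X = rng.random((120, 8))
y = np.repeat(np.arange(4), 30)

classifiers = {
    # LDA/QDA: closest stock configuration (no diagonal-covariance option).
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    # Quadratic-kernel SVM, box constraint C=1, one-vs-rest, standardized data.
    "QKSVM": make_pipeline(
        StandardScaler(),
        SVC(kernel="poly", degree=2, coef0=1, C=1.0,
            decision_function_shape="ovr")),
    # 1-NN with city-block (Manhattan) distance and uniform weights.
    "KNN": make_pipeline(
        StandardScaler(),
        KNeighborsClassifier(n_neighbors=1, metric="manhattan",
                             weights="uniform")),
}

for name, clf in classifiers.items():
    clf.fit(X, y)
    print(name, round(clf.score(X, y), 2))
```

Note that scoring on the training set here only checks that each configuration runs; the study's reported accuracies come from its own evaluation protocol.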
Table 3. Image datasets considered in the experimental results.

No  Database       Classes  Samples per Class  Total Samples  Sample Resolution (pixels)  Image Format
1   AT&T [45]      30       10                 300            92 × 112                    Gray, .jpeg
2   CIE [46]       30       10                 300            2048 × 1536                 RGB, .jpeg
3   Face94 [47]    30       10                 300            180 × 200                   RGB, .jpeg
4   FERET [48,49]  50       6                  300            512 × 768                   RGB, .jpeg
Table 4. The results for the AT&T database (classification accuracy, %).

Method  LDA    QDA    SVM    KNN
1       69.7   79.3   75.3   74.3
2       73.7   83.0   91.7   91.3
3       76.7   82.7   91.7   91.7
4       79.3   88.0   93.7   92.0
5       81.0   89.7   93.7   94.3
6       68.7   77.3   82.7   76.7
7       71.3   77.0   83.7   77.7
8       72.3   78.3   84.0   83.0
9       69.7   77.3   82.0   79.7
10      67.7   74.0   82.3   73.3
11      84.3   86.0   94.3   95.0
12      78.0   79.3   94.0   90.7
13      77.3   81.3   75.3   66.3
14      84.7   89.0   96.3   94.3
15      85.3   90.0   97.0   95.3
16      85.3   92.7   97.3   96.7
17      88.3   93.0   96.3   96.0
18      83.7   84.0   87.0   85.3
19      81.0   85.3   87.7   90.3
20      82.0   85.3   88.7   89.7
21      78.7   86.0   87.7   86.0
22      78.7   79.7   81.3   78.3
23      89.0   94.7   96.0   94.3
24      88.0   89.0   97.3   97.0
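As a quick sanity check on the AT&T figures, the snippet below re-tabulates the Table 4 accuracies (transcribed by hand from the table, so the dictionary itself is an assumption of this note, not study code) and locates the best result, which should agree with the 97.3% AT&T accuracy reported in the abstract.

```python
# Re-tabulation of Table 4 (AT&T). Column order per row: (LDA, QDA, SVM, KNN).
TABLE4 = {
    1:  (69.7, 79.3, 75.3, 74.3), 13: (77.3, 81.3, 75.3, 66.3),
    2:  (73.7, 83.0, 91.7, 91.3), 14: (84.7, 89.0, 96.3, 94.3),
    3:  (76.7, 82.7, 91.7, 91.7), 15: (85.3, 90.0, 97.0, 95.3),
    4:  (79.3, 88.0, 93.7, 92.0), 16: (85.3, 92.7, 97.3, 96.7),
    5:  (81.0, 89.7, 93.7, 94.3), 17: (88.3, 93.0, 96.3, 96.0),
    6:  (68.7, 77.3, 82.7, 76.7), 18: (83.7, 84.0, 87.0, 85.3),
    7:  (71.3, 77.0, 83.7, 77.7), 19: (81.0, 85.3, 87.7, 90.3),
    8:  (72.3, 78.3, 84.0, 83.0), 20: (82.0, 85.3, 88.7, 89.7),
    9:  (69.7, 77.3, 82.0, 79.7), 21: (78.7, 86.0, 87.7, 86.0),
    10: (67.7, 74.0, 82.3, 73.3), 22: (78.7, 79.7, 81.3, 78.3),
    11: (84.3, 86.0, 94.3, 95.0), 23: (89.0, 94.7, 96.0, 94.3),
    12: (78.0, 79.3, 94.0, 90.7), 24: (88.0, 89.0, 97.3, 97.0),
}

COLS = ("LDA", "QDA", "SVM", "KNN")
# Find the single best (accuracy, method, classifier) entry.
best_acc, best_method, best_clf = max(
    (acc, method, COLS[i])
    for method, row in TABLE4.items()
    for i, acc in enumerate(row)
)
print(f"best AT&T accuracy: {best_acc}% ({best_clf})")
# The peak value, 97.3% with SVM, matches the abstract's AT&T figure.
```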
Table 5. The results for the CIE database (classification accuracy, %).

Method  LDA    QDA    SVM    KNN
1       98.0   99.0   97.3   96.7
2       97.7   98.7   99.7   100.0
3       97.7   99.7   99.3   100.0
4       98.7   99.0   100.0  100.0
5       98.7   99.7   100.0  100.0
6       96.3   98.7   99.3   99.7
7       96.7   99.0   99.0   99.7
8       97.0   98.7   99.7   99.7
9       97.0   99.0   99.3   99.7
10      97.7   100.0  98.0   98.3
11      98.7   99.3   100.0  100.0
12      97.3   99.7   99.7   100.0
13      100.0  99.7   92.7   90.7
14      100.0  100.0  100.0  100.0
15      100.0  100.0  100.0  100.0
16      99.7   99.7   100.0  100.0
17      100.0  99.7   100.0  100.0
18      99.3   100.0  96.7   99.0
19      99.7   100.0  98.3   98.7
20      99.7   100.0  98.0   99.7
21      99.7   99.7   98.7   98.3
22      99.7   99.7   98.0   99.3
23      99.3   99.7   100.0  100.0
24      100.0  100.0  100.0  100.0
Table 6. The results for the Face94 database (classification accuracy, %).

Method  LDA    QDA    SVM    KNN
1       99.0   97.3   98.3   96.7
2       99.0   98.0   99.3   99.0
3       99.0   98.3   99.3   99.3
4       99.0   96.3   99.0   98.7
5       98.7   97.3   99.0   98.7
6       97.3   96.7   98.3   96.3
7       97.7   97.0   97.7   96.7
8       99.3   97.7   98.7   98.7
9       99.0   97.0   97.3   97.0
10      98.3   96.7   98.0   97.0
11      99.7   99.7   100.0  99.7
12      100.0  99.7   100.0  100.0
13      100.0  99.7   96.0   91.0
14      100.0  100.0  100.0  100.0
15      100.0  100.0  100.0  100.0
16      100.0  99.7   99.7   99.7
17      100.0  99.7   99.7   99.7
18      100.0  99.7   100.0  99.3
19      100.0  99.3   99.7   99.0
20      100.0  99.7   99.7   99.3
21      100.0  99.7   99.7   99.0
22      100.0  99.3   99.3   98.7
23      100.0  100.0  100.0  99.7
24      100.0  100.0  100.0  100.0
Table 7. The results for the FERET database (classification accuracy, %).

Method  LDA    QDA    SVM    KNN
1       77.0   62.7   89.3   91.3
2       69.3   72.0   93.0   93.3
3       71.7   75.7   92.3   93.7
4       71.3   78.3   93.0   94.0
5       75.0   84.0   94.3   95.3
6       72.3   74.7   94.0   94.7
7       73.7   70.0   94.7   95.3
8       74.0   78.7   94.3   94.0
9       74.0   74.7   92.0   94.3
10      73.0   68.7   74.7   84.0
11      77.0   77.0   92.0   95.3
12      69.0   72.3   92.0   92.3
13      77.3   64.0   89.3   91.0
14      76.7   77.0   95.3   94.0
15      77.3   75.0   94.0   95.0
16      76.7   75.7   95.3   95.0
17      77.3   77.7   96.0   96.0
18      73.3   65.7   79.3   82.0
19      73.7   65.0   81.3   88.7
20      72.0   68.3   84.7   90.0
21      69.3   68.0   78.0   88.3
22      69.7   66.3   77.0   86.7
23      81.0   81.7   95.3   96.3
24      78.7   81.0   94.7   94.7

Tuncer, T.; Dogan, S.; Abdar, M.; Ehsan Basiri, M.; Pławiak, P. Face Recognition with Triangular Fuzzy Set-Based Local Cross Patterns in Wavelet Domain. Symmetry 2019, 11, 787. https://doi.org/10.3390/sym11060787
