Article

Detection of Iris Presentation Attacks Using Feature Fusion of Thepade’s Sorted Block Truncation Coding with Gray-Level Co-Occurrence Matrix Features

Smita Khade, Shilpa Gite, Sudeep D. Thepade, Biswajeet Pradhan and Abdullah Alamri

1 Symbiosis Institute of Technology, Symbiosis International, Deemed University, Pune 412115, India
2 Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International, Deemed University, Pune 412115, India
3 Pimpri Chinchwad College of Engineering, Savitribai Phule Pune University (SPPU), Pune 411044, India
4 Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology, Sydney, NSW 2007, Australia
5 Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
6 Department of Geology & Geophysics, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(21), 7408; https://doi.org/10.3390/s21217408
Submission received: 26 August 2021 / Revised: 1 October 2021 / Accepted: 28 October 2021 / Published: 8 November 2021

Abstract
Iris biometric detection provides contactless authentication, preventing the spread of COVID-19-like contagious diseases. However, these systems are prone to spoofing attempted with contact lenses, replayed video, and printed images, making them vulnerable and unsafe. This paper proposes an iris liveness detection (ILD) method to mitigate spoofing attacks, taking global-level features of Thepade’s sorted block truncation coding (TSBTC) and local-level features of the gray-level co-occurrence matrix (GLCM) of the iris image. Thepade’s SBTC extracts global color texture content as features, and GLCM extracts local fine-texture details. The fusion of global and local content presentation may help distinguish between live and non-live iris samples, and this fusion of Thepade’s SBTC with GLCM features is considered in the experimental validation of the proposed method. The features are used to train nine assorted machine learning classifiers, including naïve Bayes (NB), decision tree (J48), support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), and ensembles (SVM + RF + NB, SVM + RF + RT, RF + SVM + MLP, J48 + RF + MLP) for ILD. Accuracy, precision, recall, and F-measure are used to evaluate the performance of the proposed ILD variants. The experimentation was carried out on four standard benchmark datasets, and our proposed model showed improved results with the feature fusion approach. The proposed fusion approach gave 99.68% accuracy using the RF + J48 + MLP ensemble of classifiers, immediately followed by the RF algorithm, which gave 95.57%. The better capability of iris liveness detection will improve human–computer interaction and security in the cyber-physical space by improving person validation.

1. Introduction

Automatic access to a system by a genuine person has become very simple in the information era. For automated system access, validation of user identity is crucial. Biometric authentication systems are computer-based systems that use biometric traits to verify a user’s identity. Biometric authentication has advantages over conventional password-based authentication mechanisms [1]: it eliminates the need to remember a password or PIN or to keep a card in possession. Conventional security systems cannot differentiate between genuine users and impostors who gain access unethically [2]. For security-critical cyber applications, biometric authentication may also be considered an additional layer of authentication along with existing conventional authentication modes [3]. As the iris has a complex texture and unique features, it is widely used to identify and authenticate a person in most applications [2], e.g., in the Aadhaar card project to identify India’s citizens. Amsterdam airport and the United States–Canadian border [4] use iris-based authentication. Compared to fingerprint and facial recognition, iris-based authentication enables more reliable contactless detection of a user. The contactless approach helps to prevent the spread of viruses and diseases such as COVID-19. Even though the iris has a unique textural pattern, an impostor might counterfeit it.
People usually attack a biometric system to gain access to other people’s accounts or to hide their true identity. The iris detection system can easily be spoofed using different types of spoofing attacks. Table 1 shows the iris presentation attacks found in the literature.
Analyzing threat and vulnerability is crucial for securing the biometric system. The challenging threat of spoofing a biometric authentication system is mitigated with liveness detection of acquired biometric traits before authentication [9].
The critical contributions of the research work presented here are as follows:
  • Use of Thepade’s sorted block truncation coding (TSBTC) and gray-level co-occurrence matrix (GLCM) features of iris image data for the first time in iris liveness detection (ILD).
  • Implementation of the fusion of best TSBTC N-ary global features with GLCM local features from an iris image, for the first time in ILD.
  • Performance analysis of ML classifiers and ensembles to finalize the best classifier for ILD.
  • Validating the performance of the proposed ILD method across various existing benchmark datasets and techniques.
The paper is organized as follows: Section 2 briefly reviews the related literature, Section 3 presents the proposed ILD method, and Section 4 describes the experimental set-up. The observed results and the inferences drawn from them are presented in Section 5 and discussed in Section 6. The concluding remarks and future research directions are given in Section 7.

2. Related Work

Many attempts have been made to detect the liveness of sensed biometric traits before they are authenticated. A few prominent approaches are discussed in this section. Kaur et al. [10] used a rotation-invariant feature set consisting of Zernike moments and polar harmonic transforms that extract local intensity variations to detect iris spoofing attacks. Spoofing attacks on various sensors also have a considerable effect on the overall efficiency of a system. Their system can detect only print and contact lens attacks. Hough transform and GLCM were used to extract features from iris images; these extracted features were passed to discriminant analysis (DA), used as a classification tool for differentiating live images from spoofed ones.
Agarwal et al. [11] used fingerprint and iris identity for liveness detection. Standard Haralick statistical features based on the GLCM and the neighborhood gray-tone difference matrix (NGTDM) generate a feature vector from the fingerprint, and texture features from the iris are used to boost the performance of the system. They used a standard dataset to test whether this model performs better than the existing model; in the existing system, GLCM has a huge feature vector size.
In a recent paper, Jusman et al. (2020) [12] compared their approach with other existing approaches and showed that it performs better, with 100% accuracy. The limitation of this study is that the authors followed a traditional pipeline of segmentation, normalization, and feature extraction, which is complex and time-consuming. Subsequently, Khuzani et al. [13] extracted shape, density, FFT, GLCM, GLDM, and wavelet features from iris images. In total, 2000 iris images from the CASIA-Iris-Interval dataset were used for implementation. The highest accuracy of 99.64% was achieved using a multilayer neural network.
Agarwal et al. (2020) used a feature descriptor, the local binary hexagonal extrema pattern, for fake iris detection [14]. The proposed descriptor exploits the relationship between the center pixel and its six hexagonal neighbors; the hexagonal structure is preferable to the rectangular structure due to its higher symmetry. The limitation of this approach is that it covers only print and contact lens attacks and is highly complex [14]. Thavalengal et al. (2016) [15] developed a smartphone system that captures RGB and NIR images of the eye and the iris. Pupil localization techniques with distance metrics are used for detection. The generated feature vector has 4096 elements, which is extensive. The authors claimed a reasonable liveness detection rate, working with a real-time database.
TSBTC has been used many times in the literature in other domains, but none of those studies addressed iris liveness detection. Some of the studies from other domains are discussed here. Dewan et al. (2021) used TSBTC for image retrieval with a key-point extraction method [16]. Chaudhari et al. (2021) used a fusion of TSBTC and Sauvola thresholding features [17]; with multiple classifiers, including SVM, Kstar, J48, RF, RT, and ensembles, the authors achieved good accuracy. To enhance image classification, Thepade et al. (2018) used TSBTC with feature-level fusion of Niblack thresholding features and SVM, RF, Bayes net, and ensembles of classifiers [18].
In their work, Fathy and Ali (2018) did not use the segmentation and normalization phases typically found in fake iris detection systems [8]. Wavelet packets (WPs) are used to decompose the original image. They claimed 100% accuracy, but the method does not work with all types of attacks and covered only limited spoofing attacks. Hu et al. (2016) performed ILD using regional features [19]. Regional features are designed based on the relationship of features with neighboring regions; in their experiment, 144 relational measures based on regional features were used. Czajka (2015) designed a liveness detection system using pupil dynamics [20]. In this system, the pupil reaction to sudden changes in light intensity is measured: if the eye reacts to the change, the eye is live; otherwise, it is fake. Linear and non-linear support vector machines are used to classify natural reactions and spontaneous oscillations. The limitation of the system is that it measures diverse functions, which takes time. The data used in this analysis do not include any measurements from older people, so there is some inaccuracy in the observations [20].
Naqvi et al. (2020) developed a system to detect accurate ocular regions, such as the iris and sclera [21]. This system is based on a convolutional neural network (CNN) model with a lite-residual encoder–decoder network. Average segmentation error is used to evaluate the segmentation results, and publicly available databases are considered for evaluating the system. Kimura et al. (2020) designed a liveness detection system using a CNN, which improves the model’s accuracy by tuning hyperparameters [22]. To measure the performance of the system, the attack presentation classification error rate (APCER) and bona fide presentation classification error rate (BPCER) are used. The hyperparameters considered in this paper are the maximum number of epochs, batch size, learning rate, and weight decay. This system works only for print and contact lens attacks.
Lin and Su (2019) developed a face anti-spoofing and liveness detection system using a CNN [23]. The image is resized to 256 × 256, and the RGB and HSV color spaces are used; the authors claim better liveness prediction [23]. Long and Zeng (2019) performed ILD with the 18-layer BNCNN architecture. Batch normalization is used in BNCNN to avoid overfitting and vanishing gradients during training [24].
Dronky et al. (2019) [25] observed that many researchers do not address all iris attacks. From the existing literature, it is observed that researchers have worked on only a few iris attacks and that large feature vector sizes are used. Table 2 summarizes the literature review in descending order of the year of publication.

3. Proposed Iris Liveness Detection Using a Feature-Level Fusion of TSBTC and GLCM

Iris recognition systems are susceptible to many security challenges. These vulnerabilities make a system less reliable for highly secured applications [3]. This paper attempts ILD using the feature-level fusion of GLCM and TSBTC features of iris images to detect whether an iris is live or fake. The proposed approach avoids preprocessing such as segmentation, normalization, and localization, conventionally used by the methods proposed in the literature, making the proposed approach swifter and relatively easier [15]. The only preprocessing done in the proposed approach is resizing the iris image to a square size. Figure 1 shows the block diagram of the ILD system. The proposed system is divided into four phases: iris image resizing, feature formation, classification, and ILD.

3.1. Resizing

Iris preprocessing plays a vital role in ILD. In the proposed algorithm, two preprocessing steps are applied. Images are acquired from four different standard datasets, each of which stores images at a different size; during preprocessing, the original images are therefore resized to 128 × 128 pixels, which maintains consistency throughout the experiment. The datasets were also captured using different sensors: some (e.g., Cogent, Vista) capture images in RGB format, and some (e.g., LG, Dalsa) capture them in grayscale. To maintain uniformity, all images are converted to grayscale.
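As an illustration of this step, a minimal preprocessing sketch assuming OpenCV (the library choice and function name are ours, not the paper’s):

```python
import cv2

def preprocess_iris(path, size=128):
    """Resize an iris image to size x size and convert it to grayscale,
    the only preprocessing used in the proposed approach."""
    img = cv2.imread(path)                         # image as stored by the sensor (BGR)
    img = cv2.resize(img, (size, size))            # normalize to 128 x 128 pixels
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # uniform grayscale format
    return gray
```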

3.2. Feature Formation and Fusion

In the proposed method, feature fusion is attempted with the help of GLCM and Thepade’s SBTC applied on iris images.

3.2.1. GLCM

The statistical distribution information of the gray-level value of an image is generated by GLCM [27,28]. GLCM is applied to a resized iris image. Figure 2 shows feature formation using GLCM. We computed four features: contrast, energy, entropy, and correlation using GLCM. These are given by:
Energy: Energy represents local gray-level consistency, as expressed in Equation (1); it is high when neighboring pixels are similar.
$$\mathrm{Energy} = \sum_{i,j} P(i,j)^{2} \quad (1)$$
Entropy: Equation (2) describes an image’s randomness. The greater the entropy, the more difficult it is to arrive at any conclusion from the data.
$$\mathrm{Entropy} = -\sum_{i,j} P(i,j)\,\log_{2} P(i,j) \quad (2)$$
Contrast: As expressed in Equation (3), contrast evaluates the intensity difference between a reference pixel and its neighbor. A low value represents poor contrast in the GLCM.
$$\mathrm{Contrast} = \sum_{i,j} |i-j|^{2}\, P(i,j) \quad (3)$$
Correlation: Equation (4) represents the linear dependency of gray-level values in the co-occurrence matrix, where $\mu_i$, $\mu_j$ and $\sigma_i$, $\sigma_j$ are the means and standard deviations of the row and column marginals of $P$.
$$\mathrm{Correlation} = \sum_{i,j} \frac{(i-\mu_i)(j-\mu_j)\, P(i,j)}{\sigma_i \sigma_j} \quad (4)$$
These four features are computed for each image. Tenfold cross-validation is used for a correct estimate of accuracy.
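For illustration, a sketch of how these four GLCM features could be computed with scikit-image; the one-pixel horizontal offset and the symmetric, normalized matrix are our assumptions, and energy and entropy are computed directly from the normalized matrix so they match Equations (1) and (2):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(gray):
    """Contrast, energy, entropy, and correlation (Equations (1)-(4))
    from a normalized gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                                   # normalized P(i, j)
    energy = np.sum(p ** 2)                                # Eq. (1)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # Eq. (2)
    contrast = graycoprops(glcm, 'contrast')[0, 0]         # Eq. (3)
    correlation = graycoprops(glcm, 'correlation')[0, 0]   # Eq. (4)
    return np.array([contrast, energy, entropy, correlation])
```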

3.2.2. TSBTC

Let the grayscale iris image be I(r,c) of size r × c pixels. The TSBTC N-ary feature vector [29,30] may be considered as [T1, T2, …, TN], where Ti indicates the ith cluster centroid of the grayscale image computed using TSBTC N-ary. In TSBTC 2-ary, the grayscale iris image I(r,c) is flattened into a one-dimensional array and sorted. Using this one-dimensional sorted array, the TSBTC 2-ary feature vector [T1, T2] is computed as shown in Equations (5) and (6). Figure 3 shows how features are extracted using TSBTC.
$$T_1 = \frac{2}{r \times c} \sum_{p=1}^{\frac{r \times c}{2}} \mathrm{Sort}(p) \quad (5)$$

$$T_2 = \frac{2}{r \times c} \sum_{p=1+\frac{r \times c}{2}}^{r \times c} \mathrm{Sort}(p) \quad (6)$$
In the proposed ILD, TSBTC is experimented with in all 10 variations (2-ary, 3-ary, 4-ary, 5-ary, 6-ary, 7-ary, 8-ary, 9-ary, 10-ary, and 11-ary) on the resized iris image. These extracted features are passed to classifiers and ensembles of classifiers to train them.
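A minimal sketch of TSBTC N-ary feature extraction as defined above; the block-mean formulation generalizes Equations (5) and (6) to N equal blocks, and the helper name is ours (np.array_split handles lengths not divisible by n, a detail the paper does not specify):

```python
import numpy as np

def tsbtc_features(gray, n=10):
    """TSBTC N-ary global features: sort all pixel values and take the
    mean of each of the n equal-sized blocks, giving [T1, ..., Tn]."""
    flat = np.sort(gray.ravel())                   # one-dimensional sorted array
    blocks = np.array_split(flat, n)               # n nearly equal blocks
    return np.array([b.mean() for b in blocks])    # per-block cluster centroids
```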

3.2.3. Fusion of TSBTC and GLCM

The best-performing TSBTC N-ary global features and the local-level GLCM features are concatenated to get the feature-level fusion for ILD. Here, both fusions are considered: TSBTC 10-ary + GLCM and TSBTC 11-ary + GLCM. Let the grayscale iris image be I(r,c) of size r × c pixels. The fused feature vector of TSBTC N-ary and GLCM can be represented as [T1, T2, …, TN, G1, G2, G3, G4]. Figure 4 displays how feature vectors are formed by fusing GLCM and TSBTC. Individual feature vector elements can be extracted using the mathematical models elaborated in Section 3.2.1 and Section 3.2.2.
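Feature-level fusion then reduces to concatenation; a sketch using the tsbtc_features and glcm_features helpers sketched above:

```python
import numpy as np

def fused_features(gray, n=10):
    """Feature-level fusion [T1, ..., Tn, G1, G2, G3, G4] as in Figure 4:
    TSBTC N-ary global features followed by the four GLCM local features."""
    return np.concatenate([tsbtc_features(gray, n), glcm_features(gray)])
```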

3.3. Classification and Iris Liveness Detection

The proposed approach of ILD uses different ML classifiers and their ensemble combinations. Tenfold cross-validation is used for training these classifiers; it is one of the better training approaches because every sample in the dataset gets a chance to appear in both the training and test folds, resulting in a less biased trained classifier. The ML classifiers employed here are support vector machine (SVM), naïve Bayes (NB), random forest (RF), random tree (RT), J48, and multilayer perceptron (MLP), along with ensembles of a few of these classifiers.
SVM—In an N-dimensional space, where N is the number of features, SVM finds the hyperplane that most distinctly separates the data points, maximizing the margin between the classes [31].
J48—A decision tree-based classification algorithm (the Weka implementation of C4.5) [31].
Random Forest—It takes the mean prediction of the individual trees formed from an ensemble of decision trees, overcoming the decision tree classifier’s tendency to overfit the training data [31].
Random Tree—A supervised learning algorithm that builds a decision tree by continuously splitting the data on randomly selected features [32].
Naïve Bayes—A family of classification algorithms based on Bayes’ theorem that share a common objective: predicting, for each data point, the probability of belonging to each class [32].
Ensemble method—Using multiple models simultaneously on a single dataset is often better than using a single model; this is called ensemble learning [17]. A model is trained using different classifiers, and the final output is an ensemble of those classifiers. Majority voting logic is used for the ensembles of ML classifiers in the proposed method.
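As an illustration, a minimal sketch of such a majority-voting ensemble evaluated with tenfold cross-validation using scikit-learn; the feature matrix X and labels y are assumed to come from the fusion step, and sklearn’s DecisionTreeClassifier stands in for Weka’s J48:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hard voting implements the majority-voting logic over the three members.
ensemble = VotingClassifier(
    estimators=[('j48', DecisionTreeClassifier()),          # stand-in for Weka's J48
                ('rf', RandomForestClassifier(n_estimators=100)),
                ('mlp', MLPClassifier(max_iter=500))],
    voting='hard')

# Tenfold cross-validation: every sample is used for both training and testing.
scores = cross_val_score(ensemble, X, y, cv=10, scoring='accuracy')
print(f"10-fold mean accuracy: {scores.mean():.4f}")
```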

4. Experimental Set-Up

The experiments were performed using an Intel (R) Core (TM) i3-6006U CPU @ 2.0 GHz, 12 GB RAM, and 64-bit operating system with MATLAB R2015a as a programming platform. The experimentation code is available on request. The datasets used for experimental explorations of the proposed approach of ILD are Clarkson LiveDet2013, Clarkson LiveDet2015, IIITD Contact Lens, and IIITD Combined Spoofing.

4.1. Description of the Dataset

The detailed description of the four standard and publicly available datasets is as follows.
  • Clarkson LivDet2013—Clarkson LivDet2013 dataset has around 1536 iris images [33]. This dataset is separated into training and testing sets. To acquire images, the Dalsa sensor is used. During this experiment, the training set images are used. Table 3 shows details related to the dataset, the sensors used to acquire images, and the number of images used during this experiment. Figure 5 shows samples of images from the dataset.
  • Clarkson LivDet2015—Images in this dataset are captured using Dalsa and LG sensors [34]. Images are divided into three categories: live, pattern, and printed. In total, 25 subjects are used for live images, and 15 subjects each are used for pattern and printed images. The whole dataset is partitioned into training and testing sets.
  • IIITD Combined Spoofing Database—Images used in this dataset are captured using two iris sensors, Cogent and Vista [35]. The images are divided into three categories: normal, print-scan attack, and print-capture attack.
  • IIITD Contact Lens—Images in this dataset are captured using two iris sensors, the Cogent dual iris sensor and the Vista FA2E single iris sensor [36,37]. The images are divided into three categories: normal, transparent, and colored. In total, 101 subjects are used. Both left and right iris images of each subject are captured; therefore, there are 202 iris classes.

4.2. Performance Measures

To compare the performance of all the investigated variations of the proposed ILD method, accuracy, recall, F-measure, and precision are used as performance metrics.
Let TP, TN, FP, and FN, respectively, be the true positives, true negatives, false positives, and false negatives of the ILD. TP indicates the data samples that are predicted as live iris and are actually live samples. TN gives the data samples detected as spoofed iris that are actually spoofed iris samples. FP indicates the samples identified as live that are actually fake. FN shows the data samples detected as spoofed that are actually live iris samples. The confusion matrix is shown in Figure 6. Equations (7)–(13) are used to calculate the accuracy, precision, recall, F-measure, attack presentation classification error rate (APCER) [38], normal presentation classification error rate (NPCER), and average classification error rate (ACER), respectively.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (7)$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (8)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (9)$$

$$F\text{-}\mathrm{measure} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (10)$$

$$\mathrm{APCER} = \frac{FP}{FP + TN} \quad (11)$$

$$\mathrm{NPCER} = \frac{FN}{FN + TP} \quad (12)$$

$$\mathrm{ACER} = \frac{\mathrm{APCER} + \mathrm{NPCER}}{2} \quad (13)$$
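A small helper illustrating Equations (7)–(13) computed from the ILD confusion matrix counts (a sketch; the function name is ours):

```python
def ild_metrics(tp, tn, fp, fn):
    """Compute the ILD evaluation metrics of Equations (7)-(13)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)                  # Eq. (7)
    precision = tp / (tp + fp)                                  # Eq. (8)
    recall = tp / (tp + fn)                                     # Eq. (9)
    f_measure = 2 * precision * recall / (precision + recall)   # Eq. (10)
    apcer = fp / (fp + tn)   # attack samples accepted as live, Eq. (11)
    npcer = fn / (fn + tp)   # live samples rejected as attacks, Eq. (12)
    acer = (apcer + npcer) / 2                                  # Eq. (13)
    return {'accuracy': accuracy, 'precision': precision, 'recall': recall,
            'f_measure': f_measure, 'APCER': apcer, 'NPCER': npcer, 'ACER': acer}
```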

5. Results

This section is organized into three subsections. Section 5.1 presents the results and graphs of the TSBTC approach. Section 5.2 presents the results of the GLCM technique. The fusion of TSBTC and GLCM is discussed in Section 5.3.

5.1. TSBTC Results

The proposed approach of ILD is experimented with on four benchmark datasets. Accuracy, recall, precision, and F-measure are used as performance metrics to evaluate the variants of the proposed approach. With 128 × 128 iris images, TSBTC is experimented with in all 10 variations: 2-ary, 3-ary, 4-ary, 5-ary, 6-ary, 7-ary, 8-ary, 9-ary, 10-ary, and 11-ary. These extracted features are passed to classifiers and ensembles of classifiers to train them.
The performance comparison of the TSBTC N-ary global features considered for specific ML classifiers in the proposed approach of ILD tested on the Clarkson 2013 dataset is shown in Figure 7. It can be observed that 10-ary TSBTC outperforms other N-ary TSBTC approaches for all classifiers for the Clarkson 2013 dataset.
From Table 4, it is observed that the highest ILD accuracy comes to around 94.16% with 6-ary TSBTC using the RF classifier, immediately followed by an ensemble of RF + SVM + RT classifiers. The bold values indicate the highest obtained recognition rates.
The performance comparison of the TSBTC N-ary global features considered for specific ML classifiers in the proposed approach of the ILD tested on the Clarkson 2015 dataset is shown in Figure 8. As per comparison, 11-ary TSBTC outperforms other N-ary TSBTC approaches for the Clarkson 2015 dataset for all classifiers.
From Table 5, it is observed that the highest observed ILD accuracy comes to around 95.64% with 10-ary TSBTC using the RF classifier, immediately followed by an ensemble of RF + SVM + RT classifiers.
The performance comparison of the TSBTC N-ary global features considered for specific ML classifiers in the proposed approach of ILD tested on the IIITD Contact dataset is shown in Figure 9. It can be observed that 11-ary TSBTC outperforms other N-ary TSBTC approaches for all classifiers for the IIITD Contact dataset.
From Table 6, it is observed that the highest observed ILD accuracy comes to around 76.73% with 11-ary TSBTC using the random forest classifier, immediately followed by an ensemble of RF + SVM + RT classifiers.
Figure 10 shows the performance comparison of the global features of TSBTC N-ary considered for specific ML classifiers in the proposed approach of ILD tested on the IIITD Combined Spoofing dataset. It can be observed that 10-ary TSBTC outperforms other N-ary TSBTC approaches for all classifiers for the IIITD Combined Spoofing dataset.
From Table 7, it is observed that the highest observed ILD accuracy comes to around 99.57% with 7-ary TSBTC using an ensemble of J48 + RF + MLP classifiers, immediately followed by the RF classifier.

5.2. GLCM Results

In the proposed ILD, features are extracted using GLCM by using equations explained in Section 3.2.1. These extracted features are passed to classifiers and ensembles of classifiers to train them.
Figure 11 shows the performance comparison of the GLCM local features considered for specific ML classifiers in the proposed approach of ILD tested across all datasets. Here, it can be observed that random forest and ensembles of RF + SVM + MLP give the best performance across all datasets.
The performance evaluation of GLCM local features across all datasets for specific ML classifiers in the proposed approach of ILD using percentage accuracy is shown in Figure 12. The graph shows that IIITD Combined Spoofing gives good performance across all classifiers and ensembles of classifiers.

5.3. Fusion of TSBTC and GLCM Results

The best performance from TSBTC N-ary and GLCM local-level features are concatenated to get the feature-level fusion for ILD. Here, both fusions are considered, TSBTC 10-ary + GLCM and TSBTC 11-ary + GLCM.
Figure 13, Figure 14, Figure 15 and Figure 16 show the performance comparison of TSBTC, GLCM, and the fusion of TSBTC N-ary global features and GLCM local features for specific ML classifiers in the proposed approach of ILD tested on the Clarkson 2013, Clarkson 2015, IIITD Contact, and IIITD Combined Spoofing datasets, respectively. It is observed that the fusion of TSBTC and GLCM gives the best performance across all datasets.
From Table 8, it is observed that the highest ILD accuracy comes to around 93.78% with the fusion of TSBTC’s highest performance with GLCM features using the RF classifier for the Clarkson 2013 dataset. The highest accuracy comes to around 95.57%, with the fusion of TSBTC’s highest performance with GLCM features using the RF classifier for the Clarkson 2015 dataset. For the IIITD Contact dataset, the highest accuracy comes to around 78.88%, with the fusion of TSBTC’s highest performance with GLCM features by using the RF classifier. The highest observed ILD accuracy comes to around 99.68%, with the fusion of TSBTC’s highest performance with GLCM features using RF and an ensemble of J48 + RF + MLP classifiers for the IIITD Combined Spoofing dataset.
Figure 17 shows the performance comparison of the fusion of global features of TSBTC N-ary and GLCM local features considered for specific ML classifiers in the proposed approach of ILD tested on all datasets. Here, the fusion of TSBTC and GLCM gives the best performance for all datasets used during the experiments. The highest accuracy achieved is 99.68% for the IIITD Combined Spoofing dataset, which shows that it outperforms others.
It is observed from Table 9 that ensembles of J48 + RF + MLP classifiers give the highest accuracy (99.68%) and lowest rate of ACER (0.48%) using the fusion of TSBTC and GLCM local features in the proposed approach of ILD used with the IIITD Combined Spoofing dataset.

6. Discussions

GLCM and TSBTC are widely utilized image feature extraction methods. Thepade’s sorted block truncation coding (TSBTC) has been employed in various image classification applications; here, for the first time, it is applied to assess iris presentation attacks. The feature vectors generated by the methods described in Section 3.2 are supplied as input to the machine learning classifiers and ensembles described in Section 3.3 using the Weka tool. Groupings of TSBTC and GLCM features are used to achieve feature-level fusion. For testing purposes, four standard datasets are used: Clarkson 2013, Clarkson 2015, IIITD Contact, and IIITD Combined Spoofing; additional databases such as Clarkson 2017 and CASIA can be examined in the future.
GLCM, a local feature extraction approach, has delivered good average classification accuracy for iris presentation attack detection, as stated in Section 5.2. As explained in Section 5.1, TSBTC has shown better accuracy than GLCM. The fusion of TSBTC with GLCM has provided the best iris presentation attack detection accuracy: 93.78% for the Clarkson 2013 dataset, 95.57% for the Clarkson 2015 dataset, 78.88% for the IIITD Contact dataset, and 99.68% for the IIITD Combined Spoofing dataset. Machine learning classifiers such as naïve Bayes, SVM, random forest, J48, and multilayer perceptron, and ensembles of SVM + RF + NB, SVM + RF + RT, RF + SVM + MLP, and J48 + RF + MLP, are compared on the classification accuracy of live and spoofed iris detection. The J48 + RF + MLP ensemble of classifiers has given the maximum accuracy of 99.68%. Though TSBTC has shown promise in the classification of color images for various applications such as land usage identification and gender classification, it has also shown promising results for the detection of iris presentation attacks. The experimental results showed that the proposed approach efficiently identifies iris spoofing attacks across various sensors.
The feature-level fusion of local GLCM and global TSBTC features can distinguish between live and faked artifacts and offers improved outcomes compared to the latest state-of-the-art approaches. The findings show that the proposed approach decreases classification error and improves accuracy compared with previous approaches used to detect presentation attacks in an iris detection system, as tabulated in Table 10. The proposed approach is compared to recent research in this area and outperforms the other methods. However, it works only with images of 128 × 128 pixels, and only 10 variations of TSBTC were used during implementation.

7. Conclusions

This paper proposed a novel ILD method to prevent iris spoofing through textured lens and print attacks. The proposed approach identified both kinds of print attacks (capture/scan) and detected iris spoofing attempted using different sensors. Many existing approaches rely on preprocessing such as iris segmentation, normalization, and localization, all of which add computational cost to the ILD method. In this research, TSBTC and GLCM features are extracted directly from iris images to overcome this drawback. Feature-level fusions are carried out using global TSBTC and local GLCM features. Various ML algorithms and their ensemble combinations are trained using these fused features of iris images. The experimental validation of the proposed liveness detection approach is done on four benchmark datasets.
The performance comparison of variants of the proposed approach is made using ISO/IEC biometric performance evaluation metrics, including APCER, NPCER, ACER, accuracy, precision, recall, and F-measure. For the Clarkson 2013 dataset, fake images are identified with 93.78% accuracy, whereas for Clarkson 2015 the achieved accuracy is 95.57% with the RF model. The accuracy obtained for IIITD Contact is 78.88%, and for IIITD Combined Spoofing it is 99.68%. Compared with the Iris Liveness Detection Competition (LivDet-Iris) 2020, the proposed approach got the lowest ACER of 0.48%. The experimental results showed that the proposed approach efficiently identifies iris spoofing attacks across various sensors. In future work, this framework may be extended with the best-performing features. Currently, the presented work explored Thepade’s SBTC as a global representation of iris content; a local content representation of Thepade’s SBTC would be an exciting exploration in the future. Moreover, the proposed fusion framework may be applied to the liveness detection of other biometric traits.

Author Contributions

Data curation: S.K.; writing—original draft: S.K.; supervision: S.G. and B.P.; project administration: S.G. and B.P.; conceptualization: S.D.T.; methodology: S.D.T. and S.G.; validation: B.P.; visualization: S.D.T., S.K., S.G. and B.P.; resources: B.P. and A.A.; writing—review and editing: B.P.; funding acquisition: B.P. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, the University of Technology Sydney, Australia. This research was also supported by Researchers Supporting Project number RSP-2021/14, King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability

The data supporting this study’s findings are available on request from the corresponding author. The data are not publicly available due to the privacy concern of research participants.

Acknowledgments

Thanks to the Symbiosis Institute of Technology, Symbiosis International (Deemed University), and Symbiosis Centre for Applied Artificial Intelligence for supporting this research. We thank all individuals for their expertise and assistance throughout all aspects of our study and for their help in writing the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Su, L.; Shimahara, T. Advanced iris recognition using fusion techniques. NEC Tech. J. 2019, 13, 74–77. [Google Scholar]
  2. Lee, H.; Park, S.H.; Yoo, J.H.; Jung, S.H.; Huh, J.H. Face recognition at a distance for a stand-alone access control system. Sensors 2020, 20, 785. [Google Scholar] [CrossRef] [Green Version]
  3. Khade, S.; Ahirrao, S.; Thepade, S. Bibliometric Survey on Biometric Iris Liveness Detection. Available online: https://www.proquest.com/openview/e3b5291b23d16de13ce0f0bd5dcb004b/1?pq-origsite=gscholar&cbl=54903 (accessed on 31 October 2021).
  4. Kaur, J.; Jindal, N. A secure image encryption algorithm based on fractional transforms and scrambling in combination with multimodal biometric keys. Multimed. Tools Appl. 2019, 78, 11585–11606. [Google Scholar] [CrossRef]
  5. Choudhary, M.; Tiwari, V.; Venkanna, U. An approach for iris contact lens detection and classification using ensemble of customized DenseNet and SVM. Futur. Gener. Comput. Syst. 2019, 101, 1259–1270. [Google Scholar] [CrossRef]
  6. Chen, Y.; Zhang, W. Iris Liveness Detection: A Survey. In Proceedings of the 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September 2018. [Google Scholar] [CrossRef]
  7. Trokielewicz, M.; Czajka, A.; Maciejewicz, P. Human iris recognition in post-mortem subjects: Study and database. In Proceedings of the 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Niagara Falls, NY, USA, 6–9 September 2016. [Google Scholar]
  8. Fathy, W.S.A.; Ali, H.S. Entropy with Local Binary Patterns for Efficient Iris Liveness Detection. Wirel. Pers. Commun. 2018, 102, 2331–2344. [Google Scholar] [CrossRef]
  9. Khade, S.; Thepade, S.D. Novel Fingerprint Liveness Detection with Fractional Energy of Cosine Transformed Fingerprint Images and Machine Learning Classifiers. In Proceedings of the 2018 IEEE Punecon, Pune, India, 30 November–2 December 2018. [Google Scholar] [CrossRef]
  10. Kaur, B.; Singh, S.; Kumar, J. Cross-sensor iris spoofing detection using orthogonal features. Comput. Electr. Eng. 2019, 73, 279–288. [Google Scholar] [CrossRef]
  11. Agarwal, R.; Jalal, A.S.; Arya, K.V. A multimodal liveness detection using statistical texture features and spatial analysis. Multimed. Tools Appl. 2020, 79, 13621–13645. [Google Scholar] [CrossRef]
  12. Jusman, Y.; Cheok, N.S.; Hasikin, K. Performances of proposed normalization algorithm for iris recognition. Int. J. Adv. Intell. Inform. 2020, 6, 161–172. [Google Scholar] [CrossRef]
  13. Khuzani, A.Z.; Mashhadi, N.; Heidari, M.; Khaledyan, D. An approach to human iris recognition using quantitative analysis of image features and machine learning. arXiv 2020, arXiv:2009.05880. [Google Scholar] [CrossRef]
  14. Agarwal, R.; Jalal, A.S.; Arya, K.V. Local binary hexagonal extrema pattern (LBHXEP): A new feature descriptor for fake iris detection. Vis. Comput. 2020, 37, 1357–1368. [Google Scholar] [CrossRef]
  15. Thavalengal, S.; Nedelcu, T.; Bigioi, P.; Corcoran, P. Iris liveness detection for next generation smartphones. IEEE Trans. Consum. Electron. 2016, 62, 95–102. [Google Scholar] [CrossRef]
  16. Dewan, J.H.; Thepade, S.D. Feature fusion approach for image retrieval with ordered color means based description of keypoints extracted using local detectors. J. Eng. Sci. Technol. 2021, 16, 482–509. [Google Scholar]
  17. Thepade, S.D.; Chaudhari, P.R. Land Usage Identification with Fusion of Thepade SBTC and Sauvola Thresholding Features of Aerial Images Using Ensemble of Machine Learning Algorithms. Appl. Artif. Intell. 2021, 35, 154–170. [Google Scholar] [CrossRef]
  18. Thepade, S.D.; Sange, S.; Das, R.; Luniya, S. Enhanced Image Classification with Feature Level Fusion of Niblack Thresholding and Thepade’s Sorted N-Ary Block Truncation Coding using Ensemble of Machine Learning Algorithms. In Proceedings of the 2018 IEEE Punecon, Pune, India, 30 November–2 December 2018. [Google Scholar] [CrossRef]
  19. Hu, Y.; Sirlantzis, K.; Howells, G. Iris liveness detection using regional features. Pattern Recognit. Lett. 2016, 82, 242–250. [Google Scholar] [CrossRef]
  20. Czajka, A. Pupil dynamics for iris liveness detection. IEEE Trans. Inf. Forensics Secur. 2015, 10, 726–735. [Google Scholar] [CrossRef]
  21. Naqvi, R.A.; Lee, S.W.; Loh, W.K. Ocular-net: Lite-residual encoder decoder network for accurate ocular regions segmentation in various sensor images. In Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea, 19–22 February 2020. [Google Scholar] [CrossRef]
  22. Kimura, G.Y.; Lucio, D.R.; Britto, A.S.; Menotti, D. CNN hyperparameter tuning applied to iris liveness detection. arXiv 2020, arXiv:2003.00833. [Google Scholar]
  23. Lin, H.Y.S.; Su, Y.W. Convolutional neural networks for face anti-spoofing and liveness detection. In Proceedings of the 2019 6th International Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019. [Google Scholar] [CrossRef]
  24. Long, M.; Zeng, Y. Detecting iris liveness with batch normalized convolutional neural network. Comput. Mater. Contin. 2019, 58, 493–504. [Google Scholar] [CrossRef]
  25. Dronky, M.R.; Khalifa, W.; Roushdy, M. A Review on Iris Liveness Detection Techniques. In Proceedings of the 2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 8–10 December 2019. [Google Scholar] [CrossRef]
  26. Kushwaha, R.; Singal, G.; Nain, N. A Texture Feature Based Approach for Person Verification Using Footprint Bio-Metric; Springer: Heidelberg, Germany, 2020; pp. 1581–1611. ISBN 0123456789. [Google Scholar]
  27. Raiu, V.; Vidyasree, P.; Patel, A. Ameliorating the Accuracy Dimensional Reduction of Multi-modal Biometrics by Deep Learning. In Proceedings of the 2021 IEEE Aerospace Conference (50100), Big Sky, MT, USA, 6–13 March 2021. [Google Scholar] [CrossRef]
  28. Rasool, R.A. Feature-Level vs. Score-Level Fusion in the Human Identification System. Appl. Comput. Intell. Soft Comput. 2021, 2021, 6621772. [Google Scholar] [CrossRef]
  29. Khairnar, S.; Thepade, S.D.; Gite, S. Effect of image binarization thresholds on breast cancer identification in mammography images using OTSU, Niblack, Burnsen, Thepade’s SBTC. Intell. Syst. Appl. 2021, 10–11, 200046. [Google Scholar] [CrossRef]
  30. Dewan, J.H.; Thepade, S.D. Image Retrieval using Weighted Fusion of GLCM and TSBTC Features. In Proceedings of the 2021 6th International Conference for Convergence in Technology (I2CT), Maharashtra, India, 2–4 April 2021. [Google Scholar] [CrossRef]
  31. Khade, S.; Thepade, S.D.; Ambedkar, A. Fingerprint Liveness Detection Using Directional Ridge Frequency with Machine Learning Classifiers. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018. [Google Scholar] [CrossRef]
  32. Khade, S.; Thepade, S.D. Fingerprint liveness detection with machine learning classifiers using feature level fusion of spatial and transform domain features. In Proceedings of the 2019 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA), Pune, India, 19–21 September 2019. [Google Scholar] [CrossRef]
  33. Yambay, D.; Doyle, J.S.; Bowyer, K.W.; Czajka, A.; Schuckers, S. LivDet-iris 2013-Iris Liveness Detection Competition 2013. In Proceedings of the 2017 IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017. [Google Scholar] [CrossRef]
  34. Yambay, D.; Walczak, B.; Schuckers, S.; Czajka, A. LivDet-Iris 2015-Iris Liveness Detection Competition 2015. In Proceedings of the 2017 IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017. [Google Scholar]
  35. Kohli, N.; Yadav, D.; Vatsa, M.; Singh, R.; Noore, A. Detecting medley of iris spoofing attacks using DESIST. In Proceedings of the 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Niagara Falls, NY, USA, 6–9 September 2016. [Google Scholar] [CrossRef]
  36. Yadav, D.; Kohli, N.; Doyle, J.S.; Singh, R.; Vatsa, M.; Bowyer, K.W. Unraveling the Effect of Textured Contact Lenses on Iris Recognition. IEEE Trans. Inf. Forensics Secur. 2014, 9, 851–862. [Google Scholar] [CrossRef]
  37. Kohli, N.; Yadav, D.; Vatsa, M.; Singh, R. Revisiting iris recognition with color cosmetic contact lenses. In Proceedings of the 2013 International Conference on Biometrics (ICB), Madrid, Spain, 4–7 June 2013. [Google Scholar] [CrossRef]
  38. Boyd, A.; Fang, Z.; Czajka, A.; Bowyer, K.W. Iris presentation attack detection: Where are we now? Pattern Recognit. Lett. 2020, 138, 483–489. [Google Scholar] [CrossRef]
  39. Das, P.; McFiratht, J.; Fang, Z.; Boyd, A.; Jang, G.; Mohammadi, A.; Purnapatra, S.; Yambay, D.; Marcel, S.; Trokielewicz, M.; et al. Iris Liveness Detection Competition (LivDet-Iris)-The 2020 Edition. In Proceedings of the 2020 IEEE International Joint Conference on Biometrics (IJCB), Houston, TX, USA, 6 January 2021. [Google Scholar] [CrossRef]
  40. Arora, S.; Bhatia, M.P.S.; Kukreja, H. A Multimodal Biometric System for Secure User Identification Based on Deep Learning. Adv. Intell. Syst. Comput. 2021, 1183, 95–103. [Google Scholar] [CrossRef]
  41. Omran, M.; Alshemmary, E.N. An Iris Recognition System Using Deep convolutional Neural Network. J. Phys. Conf. Ser. 2020, 1530, 012159. [Google Scholar] [CrossRef]
  42. Zhao, T.; Liu, Y.; Huo, G.; Zhu, X. A Deep Learning Iris Recognition Method Based on Capsule Network Architecture. IEEE Access 2019, 7, 49691–49701. [Google Scholar] [CrossRef]
  43. Wang, K.; Kumar, A. Cross-spectral iris recognition using CNN and supervised discrete hashing. Pattern Recognit. 2019, 86, 85–98. [Google Scholar] [CrossRef]
  44. Cheng, Y.; Liu, Y.; Zhu, X.; Li, S. A Multiclassification Method for Iris Data Based on the Hadamard Error Correction Output Code and a Convolutional Network. IEEE Access 2019, 7, 145235–145245. [Google Scholar] [CrossRef]
  45. Chatterjee, P.; Yalchin, A.; Shelton, J.; Roy, K.; Yuan, X.; Edoh, K.D. Presentation Attack Detection Using Wavelet Transform and Deep Residual Neural Net; Springer: Heidelberg, Germany, 2019. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the proposed ILD using feature fusion of TSBTC with GLCM features.
Figure 2. GLCM feature extraction from an iris image.
Figure 3. TSBTC feature extraction from an iris image.
Figure 4. Fusion of GLCM and TSBTC feature vectors from an iris image.
Figure 5. Sample iris images from an iris dataset.
Figure 6. Confusion matrix for iris liveness detection.
Figure 7. Performance evaluation of TSBTC N-ary global feature variations for the specific ML classifiers in the proposed approach of ILD for the Clarkson 2013 dataset using percentage accuracy.
Figure 8. Performance evaluation of TSBTC N-ary global feature variations for the specific ML classifiers in the proposed approach of ILD for the Clarkson 2015 dataset using percentage accuracy.
Figure 9. Performance evaluation of TSBTC N-ary global feature variations for the specific ML classifiers in the proposed approach of ILD for the IIITD Contact dataset using percentage accuracy.
Figure 10. Performance evaluation of TSBTC N-ary global feature variations for the specific ML classifiers in the proposed approach of ILD for the IIITD Combined Spoofing dataset using percentage accuracy.
Figure 11. Performance evaluation of GLCM local features for the specific ML classifiers in the proposed approach of ILD across all datasets using percentage accuracy.
Figure 12. Performance evaluation of GLCM local features across all datasets for the specific ML classifiers in the proposed approach of ILD using percentage accuracy.
Figure 13. Performance evaluation of TSBTC, GLCM, and fusion of TSBTC and GLCM features for the specific ML classifiers in the proposed approach of ILD for the Clarkson 2013 dataset using percentage accuracy.
Figure 14. Performance evaluation of TSBTC, GLCM, and fusion of TSBTC and GLCM features for the specific ML classifiers in the proposed approach of ILD for the Clarkson 2015 dataset using percentage accuracy.
Figure 15. Performance evaluation of TSBTC, GLCM, and fusion of TSBTC and GLCM features for the specific ML classifiers in the proposed approach of ILD for the IIITD Contact dataset.
Figure 16. Performance evaluation of TSBTC, GLCM, and fusion of TSBTC and GLCM features for the specific ML classifiers in the proposed approach of ILD for the IIITD Combined Spoofing dataset.
Figure 17. Performance evaluation of the fusion of TSBTC and GLCM local features across all datasets for the specific ML classifiers in the proposed approach of ILD.
Table 1. Iris presentation attacks.

Iris Presentation Attacks | Details
Print attacks | The impostor offers a printed image of a validated iris to the biometric sensor [4].
Contact lens attacks | The impostor wears contact lenses on which the pattern of a genuine iris is printed [5].
Video attacks | The impostor plays a video of a registered user in front of the biometric system [6].
Cadaver attacks | The impostor uses the eye of a dead person in front of the biometric system [7].
Synthetic attacks | The impostor embeds the iris region into authentic images to make the synthesized images more realistic [8].
Table 2. Summary of the literature review.

Paper ID | Author/Year | Feature Extraction | Attacks Identified | Datasets | Classifiers | Performance
[17] | Thepade and Chaudhari, 2021 | TSBTC and Sauvola thresholding | NA | NR | SVM, Kstar, J48, RF, RT, and ensembles | Accuracy, F-measures
[16] | Dewan and Thepade, 2021 | TSBTC | NA | NA | NA | ARA = 63.31%
[12] | Jusman et al., 2020 | Hough transform, GLCM | NR | CASIA-Iris | Discriminant analysis classifiers | Accuracy = 100%
[11] | Agarwal et al., 2020 | Texture features, GLCM | Print | ATVS (iris), LivDet2011 (finger), IIITD CLI (iris) | SVM | ACA = 96.3%
[14] | Agarwal et al., 2020 | Local binary hexagonal extrema pattern | Contact, print | IIITD CLI, ATVS-FIr | SVM | AER = 1.8%
[13] | Khuzani et al., 2020 | Shape, density, FFT, GLCM, GLDM, and wavelet | NR | CASIA-Iris-Interval | Multilayer neural network | Accuracy = 99.64%
[26] | Kushwaha et al., 2020 | GLCM, HOG, LBP | NA | Biometric 220X6 human footprint dataset | KNN, SVM, LDA, ensembles | Accuracy = 97.9%
[22] | Kimura et al., 2020 | CNN | Print, contact | Clarkson, Warsaw, IIITD-WVU, Notre Dame | — | APCER = 4.18%, BPCER = 0%
[21] | Naqvi et al., 2020 | CNN model with a lite-residual encoder–decoder network | NA | NICE-II dataset, SBVPI | CNN | Average segmentation error = 0.0061
[24] | Long and Zeng, 2019 | BNCNN | Synthetic, contact | CASIA-Iris-Lamp, CASIA-Iris-Syn, ND Contact | BNCNN | Correct recognition rate = 100%
[21] | Asmara et al., 2019 | GLCM, Gabor filter | — | CASIA v1 Iris | Naïve Bayes, SVM | Accuracy = 95.24%
[10] | Kaur et al., 2019 | Orthogonal rotation-invariant feature set comprising ZMs and PHTs | Print + scan, print + capture, patterned contact lenses | IIITD-CLI, IIS, Clarkson LivDet-Iris 2015, Warsaw LivDet-Iris 2015 | KNN | Accuracy = 98.49% (different accuracies for different datasets)
[8] | Fathy and Ali, 2018 | Wavelet packets (WPs), local binary pattern (LBP), entropy | Print + synthetic | ATVS-FIr, CASIA-Iris-Syn | SVM | ACA = 99.92%; recall, precision, F1
[18] | Thepade et al., 2018 | TSBTC, Niblack | NR | NR | SVM, RF, ensembles, Bayes net | Accuracy = 68.56%
[15] | Thavalengal et al., 2016 | Pupil localization with distance metrics | Print | Real-time datasets | Binary tree classifier | ACER = 0%
[19] | Hu et al., 2016 | LBP, histogram, SID | Contact lenses, print | Clarkson, Warsaw, Notre Dame, MobBIOfake | SVM | ER: Clarkson = 7.87%, Warsaw = 6.15%, ND = 0.08%, MobBIOfake = 1.50%
Table 3. Number of images used for the experiment from each dataset.

Database | Sensor | Image Category | No. of Images Used for the Experiment
Clarkson 2013 | Dalsa | Off (live) | 350
Clarkson 2013 | Dalsa | Pattern (contact) | 440
Clarkson 2015 | Dalsa | Live | 378
Clarkson 2015 | Dalsa | Pattern | 356
Clarkson 2015 | Dalsa | Printed | 1416
Clarkson 2015 | LG | Live | 258
Clarkson 2015 | LG | Pattern | 433
Clarkson 2015 | LG | Printed | 844
IIITD Combined Spoofing | Cogent | Normal | 2024
IIITD Combined Spoofing | Cogent | Print-capture | 1113
IIITD Combined Spoofing | Cogent | Print-scan | 980
IIITD Combined Spoofing | Vista | Normal | 2024
IIITD Combined Spoofing | Vista | Print-capture | 1092
IIITD Combined Spoofing | Vista | Print-scan | 1196
IIITD Contact | Cogent | Normal | 422
IIITD Contact | Cogent | Transparent | 1131
IIITD Contact | Cogent | Textured | 1150
IIITD Contact | Vista | Normal | 1010
IIITD Contact | Vista | Transparent | 1010
IIITD Contact | Vista | Textured | 1010
Table 4. Performance evaluation using accuracy for variants of the proposed approach of ILD with N-ary TSBTC and ML classifiers used for the Clarkson 2013 dataset.

Classifier/Ensemble | 2-ary | 3-ary | 4-ary | 5-ary | 6-ary | 7-ary | 8-ary | 9-ary | 10-ary | 11-ary | AVG
NB | 83.52 | 83.67 | 83.52 | 83.96 | 84.11 | 84.11 | 84.11 | 84.25 | 84.4 | 84.4 | 83.96
J48 | 86.15 | 87.6 | 86.58 | 88.04 | 89.5 | 86.88 | 88.92 | 90.08 | 91.69 | 88.77 | 88.38
SVM | 86.58 | 86.73 | 86.73 | 86.73 | 86.73 | 86.58 | 86.44 | 86.44 | 86.58 | 86.44 | 86.62
RF | 89.06 | 93.29 | 93.14 | 93.87 | 94.16 | 93.29 | 93.14 | 93.29 | 93.87 | 93 | 93.01
MLP | 86.44 | 86.58 | 86.44 | 86.29 | 86.58 | 87.9 | 87.9 | 87.9 | 87.6 | 88.33 | 87.07
SVM + RF + NB | 86.44 | 86.88 | 87.17 | 87.17 | 87.17 | 87.46 | 86.73 | 86.88 | 87.17 | 86.88 | 87.01
SVM + RF + RT | 88.77 | 93 | 92.12 | 92.41 | 92.56 | 92.12 | 91.54 | 92.12 | 92.71 | 92.56 | 91.93
RF + SVM + MLP | 86.58 | 86.73 | 86.88 | 86.88 | 86.73 | 87.17 | 87.17 | 87.17 | 87.31 | 87.46 | 86.96
J48 + RF + MLP | 87.17 | 89.5 | 88.62 | 89.94 | 90.08 | 90.08 | 90.37 | 91.39 | 92.27 | 91.25 | 89.94
AVG | 86.746 | 88.22 | 87.91 | 88.37 | 88.62 | 88.4 | 88.48 | 88.84 | 89.29 | 88.79 | —
Bold values indicate the highest obtained recognition rates.
Table 5. Performance evaluation using accuracy for variants of the proposed approach of ILD with N-ary TSBTC and ML classifiers used for the Clarkson 2015 dataset.

Classifier/Ensemble | 2-ary | 3-ary | 4-ary | 5-ary | 6-ary | 7-ary | 8-ary | 9-ary | 10-ary | 11-ary | AVG
NB | 64.85 | 64.71 | 64.44 | 64.57 | 64.16 | 64.16 | 64.03 | 63.89 | 63.89 | 63.89 | 64.26
J48 | 75.61 | 80.79 | 82.15 | 85.96 | 87.87 | 89.1 | 90.05 | 90.32 | 91.41 | 91.28 | 86.45
SVM | 57.08 | 58.17 | 59.4 | 61.17 | 60.49 | 60.62 | 60.89 | 61.3 | 61.44 | 61.98 | 60.25
Random Forest | 83.78 | 89.64 | 91.96 | 94.27 | 94.68 | 95.5 | 94.95 | 95.5 | 95.64 | 95.23 | 93.12
MLP | 77.11 | 78.61 | 78.47 | 74.25 | 80.24 | 76.02 | 88.82 | 89.1 | 89.23 | 91.14 | 82.3
SVM + RF + NB | 66.07 | 68.39 | 70.7 | 73.56 | 73.29 | 74.65 | 73.97 | 75.61 | 76.15 | 76.15 | 72.85
SVM + RF + RT | 83.78 | 88.14 | 90.73 | 92.23 | 92.77 | 92.5 | 93.46 | 93.73 | 91.96 | 93.86 | 91.32
RF + SVM + MLP | 78.74 | 79.15 | 81.6 | 76.7 | 73.43 | 74.38 | 70.02 | 75.34 | 68.8 | 71.66 | 74.98
J48 + RF + MLP | 82.28 | 85.83 | 87.32 | 88.96 | 90.05 | 92.09 | 91 | 92.91 | 94 | 94 | 89.84
AVG | 74.367 | 77.048 | 78.53 | 79.07 | 79.66 | 79.89 | 80.8 | 81.97 | 81.39 | 82.13 | —
Bold values indicate the highest obtained recognition rates.
Table 6. Performance evaluation using accuracy for variants of the proposed approach of ILD with N-ary TSBTC and ML classifiers used for the IIITD Contact dataset.

Classifier/Ensemble | 2-ary | 3-ary | 4-ary | 5-ary | 6-ary | 7-ary | 8-ary | 9-ary | 10-ary | 11-ary | AVG
NB | 46.41 | 45.82 | 44.32 | 43.88 | 43.73 | 44.02 | 44.47 | 63.91 | 63.82 | 64 | 50.438
J48 | 64.47 | 62.08 | 64.47 | 61.19 | 64.77 | 62.98 | 63.88 | 66.37 | 65.4 | 65.58 | 64.119
SVM | 61.04 | 61.04 | 61.04 | 61.04 | 61.04 | 61.04 | 61.04 | 64.09 | 64.09 | 64.09 | 61.955
Random Forest | 63.88 | 67.61 | 71.64 | 70.44 | 75.52 | 74.92 | 75.97 | 75.41 | 76.29 | 76.73 | 72.841
MLP | 57.61 | 62.53 | 59.7 | 59.7 | 61.49 | 61.79 | 62.38 | 65.58 | 65.23 | 66.11 | 62.212
SVM + RF + NB | 60.89 | 60.74 | 61.49 | 61.94 | 62.53 | 63.88 | 65.22 | 65.67 | 65.58 | 65.75 | 63.369
SVM + RF + RT | 61.64 | 66.71 | 71.04 | 68.35 | 71.94 | 71.04 | 74.62 | 73.22 | 73.57 | 75.68 | 70.781
RF + SVM + MLP | 61.04 | 62.08 | 61.79 | 61.64 | 62.08 | 61.64 | 61.94 | 66.28 | 66.19 | 66.37 | 63.087
J48 + RF + MLP | 61.49 | 65.82 | 68.35 | 66.86 | 68.05 | 68.5 | 68.95 | 68.48 | 68.48 | 69.71 | 67.369
AVG | 59.83 | 61.603 | 62.648 | 61.671 | 63.461 | 63.312 | 64.274 | 67.667 | 67.627 | 68.22 | —
Bold values indicate the highest obtained recognition rates.
Table 7. Performance evaluation using accuracy for variants of the proposed approach of ILD with N-ary TSBTC and ML classifiers used for the IIITD Combined Spoofing dataset.

Classifier/Ensemble | 2-ary | 3-ary | 4-ary | 5-ary | 6-ary | 7-ary | 8-ary | 9-ary | 10-ary | 11-ary | AVG
NB | 90.09 | 94.99 | 95.1 | 95.2 | 94.99 | 94.99 | 94.88 | 95.2 | 95.2 | 92.49 | 94.31
J48 | 98.08 | 98.29 | 98.61 | 98.61 | 98.08 | 99.25 | 99.04 | 98.93 | 99.14 | 98.8 | 98.68
SVM | 96.48 | 96.59 | 96.8 | 97.01 | 97.01 | 97.12 | 97.12 | 97.44 | 97.55 | 97.74 | 97.08
Random Forest | 97.97 | 98.61 | 98.82 | 99.25 | 99.14 | 99.46 | 99.36 | 99.36 | 99.25 | 99.18 | 99.04
MLP | 98.93 | 99.14 | 99.25 | 99.14 | 99.14 | 99.14 | 99.04 | 99.04 | 99.04 | 98.78 | 99.06
SVM + RF + NB | 96.27 | 96.59 | 96.91 | 97.01 | 97.01 | 97.23 | 97.23 | 97.55 | 97.65 | 98.19 | 97.16
SVM + RF + RT | 97.87 | 98.4 | 98.5 | 99.04 | 99.04 | 99.25 | 99.14 | 99.25 | 99.04 | 99.17 | 98.87
RF + SVM + MLP | 98.61 | 98.72 | 98.72 | 98.93 | 98.93 | 98.93 | 98.93 | 98.93 | 99.04 | 98.19 | 98.79
J48 + RF + MLP | 98.4 | 98.72 | 98.82 | 99.04 | 99.04 | 99.57 | 99.36 | 99.25 | 99.36 | 99.15 | 99.07
AVG | 96.96 | 97.78 | 97.94 | 98.13 | 98.04 | 98.32 | 98.23 | 98.32 | 98.36 | 97.96 | —
Bold values indicate the highest obtained recognition rates.
Table 8. Performance comparison of GLCM, TSBTC, and fusion of TSBTC and GLCM across all classifiers using an average percentage of accuracy, precision, recall, and F-ratio values.

Accuracy in percentage (%); for each dataset, the three columns are TSBTC + GLCM, GLCM, and TSBTC.

Classifier/Ensemble | Clarkson 2013 | | | Clarkson 2015 | | | IIITD Contact | | | IIITD Combined Spoofing | |
NB | 83.74 | 81.77 | 84.40 | 65.33 | 67.84 | 63.89 | 58.91 | 59.51 | 63.91 | 97.55 | 92.86 | 93.85
J48 | 91.54 | 83.23 | 90.23 | 91.34 | 73.97 | 91.35 | 69.31 | 72.94 | 65.49 | 99.04 | 96.43 | 98.97
SVM | 86.73 | 82.21 | 86.51 | 61.99 | 59.94 | 61.71 | 64.09 | 73.20 | 64.09 | 98.99 | 95.51 | 97.65
Random Forest | 93.78 | 82.79 | 93.44 | 95.57 | 82.01 | 95.44 | 78.88 | 75.36 | 76.51 | 99.57 | 96.73 | 99.22
MLP | 88.26 | 83.81 | 87.97 | 90.30 | 77.11 | 90.19 | 64.13 | 73.01 | 65.67 | 99.57 | 98.06 | 98.91
SVM + RF + NB | 86.88 | 83.38 | 87.03 | 79.43 | 64.98 | 76.15 | 69.18 | 73.64 | 65.67 | 99.20 | 96.63 | 97.92
SVM + RF + RT | 91.84 | 82.50 | 92.64 | 94.34 | 81.33 | 92.91 | 76.42 | 74.79 | 74.63 | 99.57 | 97.04 | 99.11
RF + SVM + MLP | 87.61 | 83.81 | 87.39 | 82.49 | 75.47 | 70.23 | 67.51 | 73.32 | 66.28 | 99.62 | 97.55 | 98.62
J48 + RF + MLP | 92.63 | 83.81 | 91.76 | 94.00 | 79.70 | 94.00 | 76.07 | 74.72 | 69.10 | 99.68 | 97.34 | 99.26
AVG | 89.22 | 83.03 | 89.04 | 83.86 | 73.59 | 81.76 | 69.39 | 72.28 | 67.93 | 99.20 | 96.46 | 98.16
Bold values indicate the highest obtained recognition rates.
Table 9. Performance evaluation using accuracy for the proposed approach of ILD for various datasets used during implementation.

Datasets | Classifiers | Accuracy (%) | Precision (%) | Recall (%) | F-Measure (%) | APCER (%) | NPCER (%) | ACER (%)
Clarkson 2013 | Random Forest | 93.78 | 95.50 | 86.20 | 90.60 | 7.90 | 4.12 | 6.01
Clarkson 2015 | Random Forest | 95.57 | 96.50 | 95.50 | 96.00 | 4.72 | 3.47 | 4.09
IIITD Contact | Random Forest | 78.88 | 79.30 | 79.40 | 78.60 | 21.56 | 20.28 | 20.92
IIITD Combined Spoofing | J48 + RF + MLP | 99.68 | 99.80 | 99.80 | 99.80 | 0.12 | 0.84 | 0.48
Bold values indicate the highest obtained recognition rates.
Table 10. Comparative analysis of the proposed approach and prevailing methods.

Author/Year | Feature Extraction | Dataset | Performance Measure | Classifiers | Results (%)
P. Das et al., 2021 [39] | MSU PAD1, MSU PAD2, Notre Dame PAD | Clarkson University (CU), University of Notre Dame (ND), and Warsaw University of Technology (WUT) | APCER, BPCER, ACER | SVM, RF, MLP, and CNN | ACER = 2.61; ACER = 2.18; ACER = 28.96
Arora et al., 2021 [40] | CNN | IIITD | Accuracy, FAR | VGGNet; LeNet; ConvNet | Acc = 97.98 (VGGNet); Acc = 89.38 (LeNet); Acc = 98.99 (ConvNet)
Omran and Alshemmary, 2020 [41] | CNN, IRISNet | IIITD | Sensitivity, accuracy, specificity, precision, recall, G-mean, and F-measure | SVM, KNN, NB, DT | Acc = 96.43
Zhao et al., 2019 [42] | Mask R-CNN | IIITD | Accuracy | R-CNN, CNN | Acc = 98.9
Wang and Kumar, 2019 [43] | CNN-SDH, CNN-Joint Bayesian | PolyU bi-spectral | Accuracy | CNN, SDH | Acc = 90.71
Cheng et al., 2019 [44] | CNN | CASIA-Iris-L | Accuracy | Hadamard + CNN | Acc = 97.41
Chatterjee et al., 2019 [45] | DWT, ResNet | ATVS | Accuracy | ResNet | Acc = 92.57
Proposed approach | TSBTC, GLCM, fusion of TSBTC and GLCM | Clarkson 2013 | Accuracy, precision, recall, and F-measure | Random Forest | Acc = 93.78
Proposed approach | TSBTC, GLCM, fusion of TSBTC and GLCM | Clarkson 2015 | Accuracy, precision, recall, and F-measure | Random Forest | Acc = 95.57
Proposed approach | TSBTC, GLCM, fusion of TSBTC and GLCM | IIITD Contact | Accuracy, precision, recall, and F-measure | Random Forest | Acc = 78.88
Proposed approach | TSBTC, GLCM, fusion of TSBTC and GLCM | IIITD Combined Spoofing | Accuracy, precision, recall, and F-measure | J48 + RF + MLP | Acc = 99.68, ACER = 0.48
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
