Article

Iris Recognition System Using Advanced Segmentation Techniques and Fuzzy Clustering Methods for Robotic Control

by Slim Ben Chaabane 1,2,*, Rafika Harrabi 1,2,* and Hassene Seddik 2
1 Computer Engineering Department, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 47512, Saudi Arabia
2 Laboratoire de Robotique Intelligente, Fiabilité et Traitement du Signal Image (RIFTSI), ENSIT-Université de Tunis, Tunis 1008, Tunisia
* Authors to whom correspondence should be addressed.
J. Imaging 2024, 10(11), 288; https://doi.org/10.3390/jimaging10110288
Submission received: 18 September 2024 / Revised: 22 October 2024 / Accepted: 29 October 2024 / Published: 8 November 2024
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

Abstract:
The idea of developing a robot controlled by iris movement to assist physically disabled individuals is innovative and has the potential to significantly improve their quality of life. Movement disabilities profoundly affect the lives of physically disabled people, and this technology can empower individuals with limited mobility and enhance their ability to interact with their environment; hence, there is a need for a robot that can be controlled using iris movement. The main idea of this work revolves around iris recognition from an eye image, specifically identifying the centroid of the iris. The centroid's position is then used to issue commands that control the robot. This approach leverages iris movement as a means of communication and control, offering a potential breakthrough in assisting individuals with physical disabilities. The proposed method aims to improve the precision and effectiveness of iris recognition by incorporating advanced segmentation techniques and fuzzy clustering methods. Fast gradient filters using a fuzzy inference system (FIS) are employed to separate the iris from its surroundings. The bald eagle search (BES) algorithm is then employed to locate and isolate the iris region, and the fuzzy KNN algorithm is applied for the matching process. This combined methodology aims to improve the overall performance of iris recognition systems by leveraging advanced segmentation, search, and classification techniques. The results of the proposed model are validated using the true success rate (TSR) and compared to those of other existing models. These results highlight the effectiveness of the proposed method on the 400 tested images representing 40 people.

1. Introduction

The concept of a robot controlled by iris movement to aid individuals with physical disabilities is an innovative one, with the potential for substantial improvements in their overall quality of life. This technology can offer greater autonomy and enhanced interaction with the environment, thereby addressing specific challenges faced by those with mobility limitations.
The impact of movement disabilities on the lives of physically disabled individuals is substantial, and this technology aims to address these challenges by enabling more independent and effective engagement with their surroundings. Hence, there is a need to develop a robot that can be controlled using iris movement.
The iris stands out as one of the safest and most accurate biometric authentication methods [1,2,3,4]. Unlike external features, like the hands and face, the iris is an internal organ, shielded and, as a result, less susceptible to damage [5,6,7]. This inherent protection contributes to the reliability and durability of iris-based authentication systems.
In their work, the authors [8] proposed a new method for iris recognition using the scale-invariant feature transformation (SIFT). The initial step involves extracting SIFT characteristic features, followed by a matching process between two images. This matching is executed by comparing associated descriptors at each local extremum. Experimental results obtained using the BioSec multimodal database reveal that the integration of SIFT with a matching approach yields significantly superior performance compared to those of various existing methods.
An alternative model discussed in [9] is the sped-up robust feature (SURF) model. The proposed model in [9] suggests the potential advantages of SURF in the context of iris recognition or other related applications. This model introduces a method that emphasizes efficient and robust feature extraction. SURF enhances the speeds of feature detection and description, making it particularly suitable for applications with real-time constraints. This model extracts unique features from annular iris images, which results in satisfactory recognition rates.
In their work, the authors [10] developed a system for iris recognition using SURF keys extracted from normalized and enhanced iris images. This meticulous process is designed to yield high-accuracy iris recognition, demonstrating the effectiveness of employing SURF-based techniques in enhancing the precision of biometric systems.
With the same objective, Masek [11] utilized the Canny edge detector and the circular Hough transform to effectively detect iris boundaries. The feature extraction process involves log-Gabor wavelets, and recognition is achieved through the application of the Hamming distance. This approach highlights the integration of various image-processing techniques to enhance iris recognition accuracy [11].
In [12], the authors introduced an iris recognition system that focuses on characterizing local variations within image structures. The approach involves constructing a one-dimensional (1D) intensity signal, capturing essential local variations from the original 2D iris image. Additionally, Gaussian–Hermite moments of intensity signals are employed as distinctive features. This classification process utilizes the cosine similarity measure and a nearest center classifier. This work demonstrates a systematic approach to iris recognition by emphasizing local image variations and employing specific features for accurate classification.
Recently, researchers have studied the integration of machine-learning techniques for iris recognition. In [13], the authors present a model that uses artificial neural networks for personal identification purposes. Another work, found in [14], applied neural networks for iris recognition. This method involves the extraction, normalization, and enhancement of eye regions from images. The process is then applied across numerous images within a dataset, employing a neural network for iris classification. These approaches demonstrate the increasing utilization of machine learning in advancing the accuracy and efficiency of iris recognition systems.
The control cycle of a robot based on iris recognition involves specific steps tailored to the task of identifying individuals using their iris patterns. As shown in Figure 1, the process begins with capturing images of individuals’ faces using a camera system equipped with appropriate optics for iris imaging. This step is crucial to obtain clear and high-resolution images of the iris.
Once the images are captured, the robot’s software (version V1.1.0.0) analyzes them to locate the regions of interest corresponding to the irises within the images. This localization step involves detecting the circular shape of the iris and isolating it from the rest of the eye.
After localizing the iris region, the robot’s software segments the iris pattern from the surrounding structures, such as eyelids and eyelashes. This segmentation process ensures that only the iris pattern is considered for recognition, improving the accuracy. The robot’s software compares the extracted iris pattern with the templates in the database using similarity metrics or pattern-matching algorithms. Based on the degree of similarity or a predefined threshold, the system makes a decision regarding the identity of the individual. Once the closest-matched template is identified, the associated gaze direction is used to estimate the user’s current gaze direction.
The estimated gaze direction can be represented numerically (e.g., angles relative to the camera’s orientation) or categorically, as shown in Figure 2 (e.g., “left”, “right”, “up”, “down”, and “center”). Depending on the application, additional processing or calibration may be necessary to translate the estimated gaze direction into meaningful actions or commands for the robot or system being controlled. In addition, throughout this process, the robot provides feedback to the user, indicating whether the iris recognition was successful or if any errors occurred. In case of errors or unsuccessful recognition attempts, the system may prompt the user to retry or seek alternative authentication methods.
In our proposed model, the iris recognition system is structured around two principal stages: “iris segmentation and localization” and “feature classification and extraction” [15,16]. This systematic approach ensures a comprehensive process that involves accurately identifying and isolating the iris in the initial stage, followed by the extraction and classification of relevant features to facilitate robust and precise iris recognition.
The proposed approach in this work diverges from traditional methods by introducing a novel concept that combines advanced segmentation, search, and classification techniques for iris recognition. The proposed method aims to exhaustively explore various solutions by integrating these techniques for improved iris recognition performance. Iris localization is achieved through the application of fast gradient filters utilizing a fuzzy inference system (FIS). This process involves employing rapid gradient-based filtering techniques in conjunction with a fuzzy inference system to accurately identify and extract the iris region from its background. Then, the process of iris segmentation is performed utilizing the bald eagle search (BES) algorithm. This involves employing the BES algorithm to efficiently locate and delineate the boundaries of the iris region within an image, ensuring accurate segmentation for subsequent analysis and recognition tasks.
The initial step employs fast gradient filters using a fuzzy inference system to segment the iris into two distinct classes. Then, the bald eagle search (BES) algorithm is applied to identify and extract the iris region from its background. Subsequently, feature extraction is performed using the integration of DWT and PCA. Finally, the classification is carried out using the fuzzy KNN classifier.
Section 2 discusses the proposed iris recognition model. The results are presented in Section 3. Section 4 concludes the paper.

2. The Proposed Method

Iris recognition is a biometric technique used to identify individuals based on the unique features of their iris. It offers an automated method for authentication and identification by analyzing patterns within the iris. This process involves capturing images or video recordings of one or both irises using cameras or specialized iris scanners. Mathematical pattern recognition techniques are then applied to extract and analyze the intricate patterns present in the iris.
In this work, we are interested in identifying people by their iris. The proposed system is conceptually different and explores new strategies. Specifically, it explores the potential for combining advanced segmentation techniques and fuzzy clustering methods. This unconventional method offers a fresh perspective on iris recognition, aiming to enhance accuracy and efficiency through innovative algorithmic integration rather than incremental design improvements.
The proposed iris recognition method and finding the centroid location, as shown in Figure 3, are developed using four fundamental steps: (1) localization, (2) segmentation, (3) iris matching/classification, and (4) finding centroid location [16].
The iris localization step involves detecting the iris region within the human image. Following this, the images are segmented into two classes: iris and non-iris. The feature extraction phase is pivotal in the recognition cycle, where feature vectors are extracted for each identified iris from the segmentation phase. Accurate feature extraction is crucial for achieving precise results in subsequent steps.
Through the application of fast gradient filters utilizing a fuzzy inference system (FIS), iris localization algorithms can accurately identify the edges of the iris while effectively distinguishing them from surrounding structures, such as eyelids. This capability enables the precise segmentation of the iris, laying the foundation for subsequent processing steps in the iris recognition pipeline. By ensuring accurate segmentation, these algorithms enhance the match accuracy and overall performance of iris recognition systems. Then, the process of iris segmentation is accomplished through the utilization of the bald eagle search (BES) algorithm. This algorithm efficiently locates and delineates the boundaries of the iris within an image, ensuring accurate segmentation. By employing the BES algorithm, the iris region can be accurately isolated from its surrounding structures, enabling further analysis and recognition tasks in the iris recognition pipeline with enhanced precision and reliability.
In the iris recognition system, feature extraction is paramount for achieving high recognition rates and reducing the classification time. The efficiency of the feature extraction technique significantly impacts the success of recognition and classification tasks on iris templates. This study investigates the integration of the discrete wavelet transform (DWT) and principal component analysis (PCA) for feature extraction from iris images.
The proposed technique aims to generate iris templates with reduced resolutions and runtimes, optimizing the classification process. Initially, the DWT is applied to the normalized iris image to extract features. The DWT enables the capture and representation of essential image characteristics in a multi-resolution framework, facilitating efficient feature extraction.
By leveraging the DWT and PCA, the proposed method enhances the effectiveness of feature extraction from iris images, resulting in improved recognition accuracy and reduced computational overhead. The integration of these techniques enables the generation of compact, yet informative, iris templates, contributing to the overall performance enhancement of the iris recognition system.
During the recognition phase, the matching and classification of image features are conducted based on a fuzzy KNN algorithm to determine the identity of an iris in comparison to all the template iris databases. This comprehensive process ensures that iris recognition is performed accurately and effectively, providing reliable results for identity verification or authentication purposes.
After the iris recognition step and the detection of the iris’s boundaries, the localization of the center of the iris is a crucial step, especially for estimating the gaze direction. Once the center of the iris is accurately localized, it can serve as a reference point for determining the direction in which a person is looking. This information is then used to control the robot in the desired direction. The flowchart shown in Figure 4 depicts the stages of the proposed iris recognition.

2.1. Iris Localization Through Fast Gradient Filters Using a Fuzzy Inference System (FIS)

Iris localization is, indeed, a critical step in iris recognition systems, as it significantly impacts the match accuracy. This step primarily involves identifying the borders of the iris, including the inner and outer edges of the iris, as well as the upper and lower eyelids.
Detecting edges in digital images through fast gradient filters using a fuzzy inference system (FIS) involves combining traditional gradient-based edge detection techniques with fuzzy logic to improve the edge detection performance, particularly in noisy or complex image environments [17].
Fast gradient filters compute the gradient of the image intensity to identify edges. They are commonly used because of their simplicity and effectiveness. The second derivative ($G$) of an image ($I$) is typically calculated using convolution with a kernel $K$ as follows:
$$G = I \ast K \qquad (1)$$
The Laplacian operator is utilized for edge detection and image enhancement tasks. The Laplacian edge detector uses only one kernel. It calculates second-order derivatives in a single pass. Two commonly used small kernels are
$$K = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \qquad (2) \qquad\qquad K = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix} \qquad (3)$$
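As a concrete illustration, the following sketch applies the first kernel to a grayscale image and normalizes the absolute response to [0, 1], the range assumed by the fuzzification step below; the use of NumPy/SciPy and the normalization choice are our assumptions, not details from the paper:

```python
import numpy as np
from scipy.signal import convolve2d

# 4-neighbour Laplacian kernel (Equation (2))
K = np.array([[0,  1, 0],
              [1, -4, 1],
              [0,  1, 0]], dtype=float)

def laplacian_response(I):
    """Second-derivative response G = I * K via 2D convolution."""
    return convolve2d(I, K, mode="same", boundary="symm")

def gradient_magnitude(I):
    """Absolute response normalized to [0, 1] for the fuzzification step."""
    G = np.abs(laplacian_response(I))
    return G / (G.max() + 1e-12)  # guard against flat (zero-gradient) images
```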
In this work, we will focus on adjusting a single parameter, such as the threshold used in edge detection, based on the fuzzified input of the gradient magnitude. A fuzzy inference system (FIS) is utilized to adaptively adjust parameters or thresholds of fast gradient filters based on fuzzy rules and input variables. The four main components of the fuzzy inference system (FIS) are depicted in Algorithm 1.
Algorithm 1: The fuzzy inference system (FIS).
Step 1: Fuzzification
Fuzzification converts the gradient magnitude (G) to fuzzy sets using linguistic variables such as “low”, “medium”, and “high”.
$$\mathrm{Low}: G \le 0.3, \qquad \mathrm{Medium}: 0.3 < G \le 0.7, \qquad \mathrm{High}: G > 0.7 \qquad (4)$$
Step 2: Fuzzy Rule Basis
The following set of fuzzy rules is based on the gradient magnitude:
If G is Low, then decrease the threshold.
If G is Medium, then maintain the threshold.
If G is High, then increase the threshold.
Step 3: Fuzzy Inference
Fuzzy inference determines the degree to which parameters or thresholds should be adjusted based on the fuzzy rules and fuzzified inputs. To do this, a Mamdani-type fuzzy inference system with trapezoidal membership functions and the max–min inference method is used, as shown in the following Figure 5:
Figure 5. Mamdani-type fuzzy inference system.
Step 4: Defuzzification
Defuzzification converts the fuzzy output to a crisp value representing the adjusted parameters or thresholds for edge detection. For simplicity, the centroid method for defuzzification is used to obtain the adjusted thresholds. The centroid of a fuzzy set $A$ with membership function $\mu_A(x)$ over the universe of discourse $X$ is given by the following equation:
$$\mathrm{centroid}(A) = \frac{\int_X x\,\mu_A(x)\,dx}{\int_X \mu_A(x)\,dx} \qquad (5)$$

where the universe of discourse is $X \subseteq [0, 1]$.
In the context of fuzzy inference systems for edge detection, Equation (5) would be applied to the fuzzy output membership function representing the degree of adjustment for the threshold. The integral is taken over the universe of discourse of the output variable, and the resulting value represents the crisp output (adjusted threshold) obtained through defuzzification. The adjusted thresholds are then applied to the fast gradient filters to perform edge detection on the input image.
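A minimal sketch of Algorithm 1 follows, assuming trapezoidal memberships, max–min (Mamdani) inference, and centroid defuzzification as stated above; the membership breakpoints and the output sets for "decrease/maintain/increase" are illustrative choices, not values from the paper:

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership function with corners a <= b <= c <= d."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (d - x) / (d - c + 1e-12)), 0.0, 1.0)

def adjusted_threshold(g, base=0.5):
    """Mamdani FIS: map a gradient magnitude g in [0, 1] to an edge threshold."""
    # Step 1: fuzzification of g (breakpoints follow the Low/Medium/High split above)
    mu_low    = trap(g, -0.1, 0.0, 0.2, 0.3)
    mu_medium = trap(g, 0.2, 0.35, 0.65, 0.8)
    mu_high   = trap(g, 0.7, 0.8, 1.0, 1.1)

    x = np.linspace(0.0, 1.0, 201)  # universe of discourse X in [0, 1]
    # Output fuzzy sets for the threshold: decrease / maintain / increase
    dec  = trap(x, -0.1, 0.0, 0.2, 0.4)
    keep = trap(x, 0.3, 0.45, 0.55, 0.7)
    inc  = trap(x, 0.6, 0.8, 1.0, 1.1)

    # Steps 2-3: max-min inference (clip each consequent by its rule strength)
    agg = np.maximum.reduce([np.minimum(mu_low, dec),
                             np.minimum(mu_medium, keep),
                             np.minimum(mu_high, inc)])

    # Step 4: centroid defuzzification (Equation (5), discretized)
    if agg.sum() == 0:
        return base
    return float((x * agg).sum() / agg.sum())
```

Applying `adjusted_threshold` per pixel (or per region) over the normalized gradient magnitude yields the adaptive threshold map consumed by the fast gradient filters.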
By applying fast gradient filters using a fuzzy inference system (FIS), iris localization algorithms can effectively identify the edges of the iris and differentiate them from surrounding structures, such as eyelids. This facilitates accurate segmentation and subsequent processing steps in the iris recognition pipeline, ultimately improving the match accuracy. Figure 6 presents the localization of the iris through the fast gradient filters using a fuzzy inference system (FIS) and the reference edge detection.
The fuzzy inference system (FIS) offers several key advantages in edge detection compared to classical gradient-based methods and even some machine-learning-based edge detectors, improving segmentation accuracy in various ways.
Gradient-based methods rely on sharp intensity changes in the image to detect edges, and they may struggle in areas with noise, texture, or soft transitions, leading to the over-detection or under-detection of edges. These methods apply fixed thresholds to gradients like in Sobel or Canny edge detectors, which can fail in complex regions of the image, where the contrast varies. These methods also rely on fixed operations, such as convolution with specific kernels, which may not work well for all types of images or edge types.
The fuzzy inference system is inherently designed to handle uncertainty and ambiguity by employing fuzzy logic: it represents image pixels using degrees of membership in different edge categories, such as strong edges, weak edges, and no edge, making it more robust in dealing with gradual transitions and noisy data. This helps in identifying edges that are not distinctly clear in gradient-based approaches. The FIS can effectively suppress noise by considering the strength of edge features through fuzzy rules without blurring significant edges. This allows for maintaining sharp edges while suppressing noise and irrelevant details, improving the segmentation accuracy. In addition, FIS is highly customizable through its rule-based approach. By defining tailored fuzzy rules for a specific image domain, FIS can be fine-tuned to detect the most relevant edges, enhancing the segmentation accuracy for the given context.
Although machine-learning methods, like deep learning, can be very effective in edge detection, they often make hard decisions based on the learned model. The fuzzy inference system does not rely on binary, hard-threshold decisions. Instead, it uses soft decision boundaries, which are more natural for many real-world images, where edges are not strictly binary. This soft decision-making process improves the detection of subtle or weak edges that machine-learning-based detectors might overlook.

2.2. Iris Segmentation Using Bald Eagle Search (BES) Algorithm

A novel method for iris segmentation uses a bald eagle search (BES) algorithm [18]. This approach consists of three main stages: select, search, and swoop. During the select phase, the algorithm thoroughly explores the entirety of the available search space in order to locate potential solutions, while the search phase focuses on exploiting the selected area, and the swoop phase targets the identification of the best solution.
In the select stage, bald eagles identify and select the best area within the selected search space, where they can hunt for prey. This behavior is expressed mathematically through Equation (6).
$$P_{i,\mathrm{new}} = P_{\mathrm{best}} + \alpha\, r\,(P_{\mathrm{mean}} - P_i) \qquad (6)$$
where $i$ indexes the search agents, $\alpha$ is the parameter for controlling the changes in position and takes a value between 1.5 and 2, and $r$ is a random number that takes a value between 0 and 1.
In the selection stage, an area is selected based on the available information from the previous stage. Another search area is randomly selected that differs from but is located near the previous search area. P b e s t denotes the search space that is currently selected by bald eagles based on the best position identified during their previous search. The eagles randomly search all the points near the previously selected search space.
The current movement of the bald eagles is calculated by multiplying the randomly explored prior information by the factor $\alpha$; this procedure introduces random changes to all the search points. $P_{\mathrm{mean}}$ indicates that all the information from the previous points has been used.
In the search stage, the eagles search for prey within the selected space, moving in different directions along a spiral path to accelerate the search. The best position for the swoop is expressed mathematically in Equation (7).
$$P_{i,\mathrm{new}} = P_i + y(i)\,(P_i - P_{i+1}) + x(i)\,(P_i - P_{\mathrm{mean}}) \qquad (7)$$
$$x(i) = \frac{x_r(i)}{\max(x_r)}, \qquad y(i) = \frac{y_r(i)}{\max(y_r)} \qquad (8), (9)$$
$$x_r(i) = r(i)\,\sin(\theta(i)), \qquad y_r(i) = r(i)\,\cos(\theta(i)) \qquad (10), (11)$$
$$\theta(i) = a\,\pi\,\mathrm{rand}, \qquad r(i) = \theta(i) + R\,\mathrm{rand} \qquad (12), (13)$$
where $a$ is a parameter that takes a value between 5 and 10 and determines the corner between the point search and the central point, $R$ takes a value between 0.5 and 2 and determines the number of search cycles, and $\mathrm{rand}$ is a random number between 0 and 1.
During the swoop stage, bald eagles swing from the optimal position within the search space to their target prey. Additionally, all the points within the space converge toward the optimal point. Equation (14) provides a mathematical representation of this behavior.
$$P_{i,\mathrm{new}} = \mathrm{rand} \cdot P_{\mathrm{best}} + x_1(i)\,(P_i - c_1 P_{\mathrm{mean}}) + y_1(i)\,(P_i - c_2 P_{\mathrm{best}}) \qquad (14)$$
$$x_1(i) = \frac{x_r(i)}{\max(x_r)}, \qquad y_1(i) = \frac{y_r(i)}{\max(y_r)} \qquad (15), (16)$$
$$x_r(i) = r(i)\,\sinh(\theta(i)), \qquad y_r(i) = r(i)\,\cosh(\theta(i)) \qquad (17), (18)$$
$$\theta(i) = a\,\pi\,\mathrm{rand}, \qquad r(i) = \theta(i) \qquad (19), (20)$$
where $c_1$ and $c_2$ are algorithmic parameters with values in the range $[1, 2]$. Lastly, the final solutions in $P$ are reported as the final population, and the best solution obtained in the population is taken as the solution to the problem.
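The following compact sketch chains the three stages in a single update step. The fitness function is user-supplied (e.g., a multilevel-thresholding criterion over the image histogram, in line with the segmentation comparisons of Section 3); the greedy replacement and the normalization by max(|·|) are our illustrative choices, and the original algorithm re-evaluates fitness after each stage:

```python
import numpy as np

rng = np.random.default_rng(0)

def bes_step(P, fitness, alpha=2.0, a=10.0, R=1.5, c1=2.0, c2=2.0):
    """One iteration of bald eagle search: select, search, then swoop."""
    n, _ = P.shape
    fit = np.array([fitness(p) for p in P])
    best = P[np.argmin(fit)]
    mean = P.mean(axis=0)

    # Select stage (Eq. (6)): move toward the best hunting area
    P_sel = best + alpha * rng.random((n, 1)) * (mean - P)

    # Search stage (Eqs. (7)-(13)): spiral movement inside the selected space
    theta = a * np.pi * rng.random(n)
    r = theta + R * rng.random(n)
    x = r * np.sin(theta); x /= np.abs(x).max()
    y = r * np.cos(theta); y /= np.abs(y).max()
    P_srch = (P_sel
              + y[:, None] * (P_sel - np.roll(P_sel, -1, axis=0))
              + x[:, None] * (P_sel - mean))

    # Swoop stage (Eqs. (14)-(20)): all points converge toward the best point
    theta = a * np.pi * rng.random(n)
    r = theta
    x1 = r * np.sinh(theta); x1 /= np.abs(x1).max()
    y1 = r * np.cosh(theta); y1 /= np.abs(y1).max()
    P_new = (rng.random((n, 1)) * best
             + x1[:, None] * (P_srch - c1 * mean)
             + y1[:, None] * (P_srch - c2 * best))

    # Greedy replacement (minimization): keep a move only if it improves fitness
    improved = np.array([fitness(q) < f for q, f in zip(P_new, fit)])
    P[improved] = P_new[improved]
    return P, P[np.argmin([fitness(p) for p in P])]
```

Here `P` is an (n × dim) population of candidate solutions (e.g., candidate threshold vectors for the iris and pupil regions), and lower fitness is better.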

2.3. Feature Extraction and Matching

Feature extraction is the most important and critical part of the iris recognition system. The successful recognition rate and reduction in classification time of two iris templates mostly depend on an efficient feature extraction technique. This work explores the integration of the DWT and PCA for feature extraction from images [19].
In this section, the proposed technique produces an iris template with a reduced resolution and runtime for classifying iris templates. To produce the template, first, the DWT is applied to the normalized iris image. Feature extraction using the discrete wavelet transform (DWT) is used in this work to capture and represent important characteristics of images in a multi-resolution framework.
The first step involves decomposing the image into four frequency sub-bands, namely, LL (low–low), LH (low–high), HL (high–low), and HH (high–high) using the DWT. The DWT achieves this by passing the signal through a series of low-pass and high-pass filters, followed by down-sampling.
The LL sub-band represents the features or characteristics of the iris so that this sub-band can be considered for further processing.
Figure 6a shows that the resolution of the original iris image is 256 × 256. After applying the DWT to a normalized iris image, the resolution of the LL sub-band is 128 × 128. The LL sub-band is a lower-resolution approximation of the iris that retains the required features, so this sub-band is used instead of the original normalized iris data for further processing using PCA. As the resolution of the iris template is reduced, the runtime of the classification is similarly reduced.
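A minimal sketch of this decomposition using the PyWavelets package; the choice of the Haar mother wavelet is our assumption, as the text does not specify one:

```python
import numpy as np
import pywt  # PyWavelets

def ll_subband(iris, wavelet="haar"):
    """Single-level 2D DWT; keep only the LL approximation sub-band.

    A 256x256 normalized iris image yields a 128x128 LL template."""
    LL, (LH, HL, HH) = pywt.dwt2(np.asarray(iris, dtype=float), wavelet)
    return LL  # the detail sub-bands (LH, HL, HH) are discarded
```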
In the second step, PCA finds the most discriminating information presented in the LL sub-band to form the feature matrix, and the resultant feature matrix is passed to the classifier for recognition. The mathematical analysis of PCA includes the mean of each vector of the matrix LL of size $(N \times M)$, which is given by the following equation:
$$x_m = \frac{1}{N} \sum_{k=1}^{N} x_k$$
The mean is subtracted from all the vectors to produce a set of zero-mean vectors, which is given by the following equation:
$$x_z = x_i - x_m$$
where x z is each zero-mean vector, x i is each element of the column vector, and x m is the mean of each column vector.
The covariance matrix is computed using the following equation:
$$C = x_z^{T} x_z$$
The eigenvectors and eigenvalues are computed using the following equation:
$$(C - \gamma I)\,e = 0$$
where $\gamma$ is the eigenvalue, $e$ is the eigenvector, and $I$ is the identity matrix.
Each eigenvector is multiplied by a zero-mean vector ( x z ) to form the feature vector. The feature vector is given by the following equation:
$$f_i = x_z\, e$$
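These five steps translate directly into the following sketch; treating the columns of the LL sub-band as the vectors $x_i$, and retaining 60, 100, or 128 components (the subset sizes used in the experiments of Section 3), are our reading of the text:

```python
import numpy as np

def pca_features(LL, n_components=128):
    """PCA on the LL sub-band, following the equations above."""
    X = np.asarray(LL, dtype=float)
    x_mean = X.mean(axis=0)               # mean of each column vector
    Xz = X - x_mean                       # zero-mean vectors
    C = Xz.T @ Xz                         # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # C is symmetric, so eigh applies
    order = np.argsort(eigvals)[::-1]     # most discriminating directions first
    E = eigvecs[:, order[:n_components]]
    return Xz @ E                         # feature matrix f = x_z * e
```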
In iris recognition, a similarity measure is utilized to quantify the resemblance between two iris patterns based on the features extracted from them. Various techniques are employed for comparing irises, each with its own advantages and limitations. For the purpose of classification, the fuzzy K-nearest neighbor algorithm seems to be an intriguing approach for iris recognition.
The fuzzy KNN algorithm revolves around the principle of membership assignment [20]. Similar to the classical KNN algorithm, this variant proceeds to find the k-nearest neighbors of a test dataset from the training dataset. It then proceeds to assign “membership” values to each class found in the list of k-nearest neighbors.
The membership values are calculated using a fuzzy math algorithm that focuses on the weight of each class. The formula for the calculation is as follows:
$$\mu_i(P) = \frac{\sum_{j=1}^{N} \mu_{ij} \left( 1 / d(P, f_j)^{\frac{2}{m-1}} \right)}{\sum_{j=1}^{N} \left( 1 / d(P, f_j)^{\frac{2}{m-1}} \right)}$$
where $P$ is the test pattern, $N$ is the number of nearest neighbors, $d(P, f_j)$ is the distance between $P$ and the feature vector $f_j$ of the $j$-th neighbor, $\mu_{ij}$ is the membership of the $j$-th neighbor in class $i$, and $m = 2$.
Finally, the class with the highest membership is then selected for the classification result.
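A minimal sketch of this membership computation; the value of k and the form of the training memberships (crisp one-hot rows here) are illustrative assumptions:

```python
import numpy as np

def fuzzy_knn(test_feat, train_feats, train_memberships, k=5, m=2):
    """Fuzzy KNN: class memberships weighted by inverse distance.

    train_memberships[j, i] is the (possibly crisp) membership of
    training sample j in class i."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    nn = np.argsort(d)[:k]                         # k nearest neighbours
    w = 1.0 / (d[nn] ** (2.0 / (m - 1)) + 1e-12)   # distance weights, m = 2
    mu = (train_memberships[nn] * w[:, None]).sum(axis=0) / w.sum()
    return int(np.argmax(mu))                      # class with highest membership
```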

2.4. Locating the Center of the Iris

Once the iris boundary is detected, locating the center involves finding the centroid of the segmented iris region. This can be achieved using the following equations:
$$C_x = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad C_y = \frac{1}{N} \sum_{i=1}^{N} y_i$$
where $(x_i, y_i)$ are the coordinates of the iris boundary points, and $N$ is the total number of boundary points.
These equations provide a basic framework for iris segmentation and center localization. However, it is important to note that implementation details may vary depending on factors such as image quality, noise levels, and the specific requirements of the application. Adjustments and optimizations may be necessary to achieve optimal segmentation and center localization performance.
To select the desired direction of the gaze from the iris’s center coordinates $(C_x, C_y)$, the Euclidean distance is calculated between the iris center and the coordinates representing each direction of the gaze (up, left, middle, right, down, and closed). The direction with the lowest Euclidean distance is then chosen as the desired direction. Samples of the coordinates representing the directions (middle, right, and left) are presented in Figure 7. To calculate the Euclidean distance ($ED$) between the iris center $(C_x, C_y)$ and the coordinates representing the “right” direction, the following formula is used:
$$ED_{\mathrm{Right}} = \sqrt{(C_x - x_{\mathrm{Right}})^2 + (C_y - y_{\mathrm{Right}})^2}$$
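Together, the centroid and nearest-direction rule reduce to a few lines; the reference coordinates for each direction below are illustrative placeholders that would come from the calibration shown in Figure 7:

```python
import numpy as np

# Calibrated reference coordinates per gaze direction (illustrative values)
DIRECTIONS = {"up": (128, 60), "down": (128, 196), "left": (60, 128),
              "right": (196, 128), "middle": (128, 128)}

def iris_center(boundary_points):
    """Centroid (Cx, Cy) of the N detected iris boundary points."""
    pts = np.asarray(boundary_points, dtype=float)
    return pts.mean(axis=0)

def gaze_direction(cx, cy):
    """Pick the direction whose reference point has the lowest Euclidean distance."""
    return min(DIRECTIONS,
               key=lambda d: np.hypot(cx - DIRECTIONS[d][0], cy - DIRECTIONS[d][1]))
```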

3. Experimental Results and Discussion

To evaluate the efficiency and accuracy of the proposed iris recognition method, experiments are conducted comparing its performance against those of the existing methods described earlier. These experiments are carried out using MATLAB software (version 10), which provides a comprehensive environment for image processing, feature extraction, classification, and evaluation.
For the performance analysis, the iris images in the CASIA Iris database are initially stored in gray-level format, utilizing 8 bits, with integer values ranging from 0 to 255. This format allows for the efficient representation of grayscale intensity levels, facilitating subsequent image processing and analysis.
The CASIA Iris database is a significant dataset comprising 756 images captured from 108 distinct individuals. As one of the largest publicly available iris databases, it offers a diverse and comprehensive collection of iris images for evaluation purposes. The database encompasses variations in factors such as illumination conditions, occlusions, and pose variations, making it suitable for assessing the robustness and accuracy of iris recognition algorithms in real-world scenarios.
After performing the image segmentation detailed in Section 2.2, the homogeneous areas within each image were acquired. The BES algorithm is employed to handle a predetermined number of regions in an image (specifically, the iris and pupil) for segmentation purposes.
Figure 8 displays the segmentation results for an example image from the database. In this figure, (a) depicts the original image, while (d) illustrates its regional representation. The segmentation results demonstrate that the two regions were accurately segmented through the bald eagle search (BES) algorithm.
To evaluate the segmentation accuracy, a segmentation sensitivity criterion is employed to ascertain the number of correctly classified pixels. This criterion measures the ability of the segmentation algorithm to accurately identify and delineate regions of interest within the image. By comparing the segmented regions to ground truth annotations or manually labeled regions, the segmentation sensitivity criterion quantifies the accuracy of the segmentation process. This evaluation metric provides valuable insights into the performance of the segmentation algorithm, enabling researchers to assess its effectiveness and reliability in accurately partitioning images into meaningful regions.
Figure 9 shows sample iris images from the evaluation dataset, comprising 756 images sourced from the CASIA database. These images serve as representative examples utilized in the evaluation process. For the segmentation of the test database, the computational effort required was significant. The segmentation process encompassed a total duration of 5.5 h to process all 756 images. On average, the segmentation algorithm took approximately 1.9 s to process each individual image. The computational time required for segmentation is an important consideration, as it directly impacts the efficiency and feasibility of the iris recognition system and the control of the robot. Although the segmentation process may be time consuming, achieving accurate and reliable segmentation results is crucial for the subsequent stages of feature extraction, matching, and classification.
The segmentation sensitivities of some existing methods (FAMT [21], FSRA [22], and BWOA [23]) and the bald eagle search (BES) algorithm [18] are shown in Table 1. It can be seen from Table 1 that 31.77%, 20.44%, and 2.73% of the pixels were incorrectly segmented using FAMT [21], FSRA [22], and BWOA [23], respectively.
Indeed, these experimental results indicate that the BES algorithm surpasses existing methods in terms of segmentation accuracy [24]. The optimal segmentation of the two regions is achieved through the BES algorithm.
We calculated the segmentation sensitivity as follows:
$$Sen\,(\%) = \frac{N_{pcc}}{M \times N} \times 100$$
where $N_{pcc}$ is the number of correctly classified pixels, and $M \times N$ is the size of the image.
The comprehensive analysis conducted in this study involved randomly dividing the 756 images into training and test datasets.
The dataset is partitioned into training and test subsets in a 4:3 ratio. Specifically, 432 images are randomly chosen for the training set, while 324 images are selected from all the cases to form the test set. Within the training set, four iris images are selected for each subject to facilitate feature extraction.
Depending on the total number of images chosen, the training set may contain 108, 216, 324, or 432 images. Importantly, for each individual, irises with matching indices are chosen for both the training and test subsets to ensure consistency in the evaluation process.
To reduce the dimensionality of the training set, a subset of feature vectors is randomly selected. Feature vectors corresponding to these selected features are then utilized to construct a smaller training set. This approach aims to minimize the computational complexity required by the FKNN classifier, as the reduced feature set results in fewer operations during classification.
Let $X$ be an original feature matrix with dimensions $(N \times N)$, where $N$ is the number of feature vectors. Let $X_{60} = \{x_1, x_2, \ldots, x_{60}\}$, $X_{100} = \{x_1, x_2, \ldots, x_{100}\}$, and $X_{128} = \{x_1, x_2, \ldots, x_{128}\}$ be randomly selected subsets containing 60, 100, and 128 feature vectors, respectively. As depicted in Figure 10, augmenting the quantity of training images leads to an improvement in recognition accuracy. When applying the proposed method with a total of 432 training images (four images per individual) and utilizing 128 feature vectors, Figure 10 shows that the recognition performance reaches a peak of 99.3827%.
In addition, we used the iris recognition rate (IRR) in our evaluation [25], calculated as follows:
$$IRR\,(\%) = \frac{TNF - TNFR}{TNF} \times 100$$
where
IRR%: the iris recognition rate;
TNF: the total number of test images;
TNFR: the total number of false recognitions.
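Both evaluation metrics (Sen% above and IRR%) reduce to one-line functions; the example values mirror the test set reported below (324 test images, 2 false recognitions):

```python
def segmentation_sensitivity(n_correct_pixels, m, n):
    """Sen% = Npcc / (M x N) x 100 (fraction of correctly classified pixels)."""
    return 100.0 * n_correct_pixels / (m * n)

def iris_recognition_rate(tnf, tnfr):
    """IRR% = (TNF - TNFR) / TNF x 100."""
    return 100.0 * (tnf - tnfr) / tnf

print(iris_recognition_rate(324, 2))  # -> 99.3827...
```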
Furthermore, numerical comparisons of the equal error rate (EER), iris recognition rate (IRR) [25], true positives (TPs), false positives (FPs), and false negatives (FNs) are provided as benchmarks against various contemporary techniques in the iris recognition literature.
Figure 11 presents a numerical comparison of iris recognitions utilizing various methods, including the sped-up robust feature (SURF) method [10], the log-Gabor wavelets and Hamming distance (LGWHD) method [11], and the local intensity variation (LIV) method [12], in the CASIA database. In this comparison, 57% of the samples for each individual are allocated to the training set, while the remaining 43% are utilized in the test set.
A false positive occurs when the system incorrectly identifies a non-iris object or noise as part of the iris. In the proposed system, the fuzzy inference system (FIS) or bald eagle search (BES) algorithm may not perfectly differentiate the iris from the surrounding areas, such as the sclera, eyelids, or eyelashes. This is particularly challenging under non-ideal lighting or with poor image quality. Enhancing the preprocessing phase to remove occlusions and improve image quality under different conditions, along with more adaptive segmentation techniques, can help reduce these misclassifications.
Interestingly, the challenge of mitigating false positives is not limited to iris recognition. Similar issues have been tackled in other domains, such as motion detection. Recent work on elementary motion detection for analyzing animal movement in a landscape [26] illustrates a parallel approach. In their study, the researchers developed a model based on elementary jumps at each time-discretized step and applied machine-learning techniques to distinguish between different motion models, such as diffusive motion. That study highlights the effectiveness of combining motion-detection algorithms with machine-learning methods to reduce misidentifications and improve model accuracy. In the context of our proposed system, incorporating techniques similar to those used in motion detection [26] could extend the applicability of the iris recognition algorithm. By leveraging concepts such as elementary motion detection and applying machine-learning-based corrections to account for environmental noise or poor image quality, the algorithm could improve its ability to differentiate between the iris and surrounding areas. This would help to reduce false positives, especially under challenging conditions, and enhance the overall performance of the system.
Recent research has proposed several methods for reducing false positives in biometric systems, particularly iris recognition. One promising approach involves adaptive thresholding [27], which dynamically adjusts the algorithm’s sensitivity based on environmental conditions and real-time data. This helps to minimize the effects of noise and poor image quality by ensuring that the detection threshold adapts to varying light intensities and object contrasts. Another effective technique is the use of spatial and temporal filtering [28], which helps to suppress noise and improve the accuracy of the object detection by analyzing patterns over time or across regions of the image. By considering changes in pixel intensity over successive frames, these filters can distinguish between genuine motion and random noise. Machine-learning (ML) techniques [29] have also been applied to refine iris detection algorithms. For instance, convolutional neural networks (CNNs) can be trained on large datasets to learn the distinguishing features of the iris, minimizing the chances of false positives by developing a more nuanced understanding of the object’s characteristics. Additionally, ensemble-learning methods, which combine multiple ML models to arrive at a final decision, can further improve detection accuracy by cross-referencing results from different algorithms.
For effective error analysis, evaluating the accuracy, false positives, false negatives, and misclassifications is essential for understanding the limitations of the system. The causes often arise from segmentation errors, occlusions, lighting variations, and noisy input data. By focusing on robust segmentation algorithms, improving image preprocessing, and enhancing the matching process, these errors can be reduced, leading to a more reliable iris-based control system for robotic interfaces.
Based on the results shown in Figure 11, a total of 322 images were identified as true positives (TPs), meaning that these images were correctly classified, and the iris was accurately recognized and matched. In contrast, two images were categorized as false positives (FPs), where non-iris elements were incorrectly identified as iris features, leading to misclassifications. Additionally, 19 images were reported as false negatives (FNs), indicating that the system failed to recognize the iris in these instances because of segmentation errors, occlusions, or poor image quality.
The overall iris recognition rate achieved using the system stands at an impressive 99.3827%, demonstrating the high accuracy and reliability of the proposed methodology. This high recognition rate highlights the effectiveness of the combined segmentation techniques (using the fuzzy inference system and bald eagle search algorithm) and the fuzzy KNN-based matching process in accurately identifying and isolating the iris region across a diverse dataset of images. Despite the minimal false positives and false negatives, these results suggest the system’s robustness, with only a small margin for potential improvements to further enhance precision and reduce the occurrence of misclassifications.
Additionally, Figure 12 provides intuitive comparisons among supervised learning based on matching features (SLMF) [30], the local invariant feature descriptor (LIFD) [31], the Fourier–SIFT method (FSIFT) [32], the fuzzified image filter and capsule network method [33], the 1D log-Gabor and 2D Gabor filter and discrete cosine transform method [34], the Canny edge detection CHT and CNN method [35], and the proposed method in the CASIA iris database in terms of the equal error rate (EER) and iris recognition rate (IRR).
As shown in Figure 12, the proposed method achieves an equal error rate (EER) of 0.1381% and an iris recognition rate (IRR) of 99.3827%. Particularly, the proposed method significantly outperforms other approaches according to the numerical comparison method. Furthermore, from Table 2, it is evident that the false positive rate (FPR) is 0.6172%, indicating a higher accuracy rate achieved using the proposed method.
The primary achievement of this work is the successful development of a system that accurately detects the iris centroid from an eye image and translates its movement to commands for robotic control. The system achieves a high detection rate of iris centroids, with an average error rate of less than 0.14% in iris localization. In addition, the system functions reliably under varying lighting conditions and with different eye shapes and sizes, enhancing its versatility.
Compared to the latest works in the field of eye-gaze-based control systems, this approach offers several improvements. Recent literature has typically reported an average error rate in iris centroid detection ranging from 0.17% to 0.43%, whereas our system reduces this error rate to less than 0.14%.
The novelty of this work lies in its combination of low-cost, high-precision iris detection with real-time robotic control, specifically tailored to assist individuals with physical disabilities. The use of a centroid-based approach for iris tracking simplifies the computation while maintaining precision, which is critical for real-time applications. Furthermore, unlike most gaze-control systems that focus on eye direction, this work emphasizes the iris position, making it more intuitive for users.
The computational time required for segmentation and classification is an important consideration, as it directly impacts the efficiency and feasibility of the iris recognition system and the control of the robot.
On average, the segmentation algorithm required around 1.9 s to process each image, and the classification algorithm took approximately 1.3 s for iris recognition and the localization of the iris center. The computational times for both segmentation and classification are crucial for assessing the feasibility of real-time operation in robotics. The relatively short processing times observed in the experiments indicate promising potential for real-time operation. However, further optimization may be necessary to achieve even faster processing speeds, particularly for applications requiring rapid responses.
In Figure 13, the movement of the robot is determined by the position of the center of the iris relative to different directional points. In Figure 13a, the robot moves forward because the distance between the center of the iris and the middle direction is minimal, indicating that the iris is perceived as centered. Figure 13b demonstrates the robot moving rightward: the distance between the center of the iris and the right direction is minimal, indicating that the iris is off-center to the left from the robot’s perspective, so the robot adjusts its trajectory to the right. Finally, in Figure 13c, the robot moves leftward, because the distance between the center of the iris and the left direction is minimal, suggesting that the iris is off-center to the right from the robot’s viewpoint. Overall, the robot’s movement is guided by minimizing the distance between the center of the iris and predetermined directional points, allowing it to navigate in different directions based on the perceived position of the iris.

4. Conclusions

This work introduces a novel approach to human iris recognition, integrating advanced segmentation techniques with fuzzy classification algorithms. The method comprises two primary phases. In the initial phase, fast gradient filters using a fuzzy inference system (FIS) are applied to precisely localize the iris within the original image. This crucial step, fundamental for the matching accuracy, primarily focuses on identifying the outer boundaries of the iris. Subsequently, efficient segmentation of these localized regions is achieved using the bald eagle search (BES) algorithm. This segmentation process enhances the delineation of iris regions, facilitating the subsequent extraction of essential iris characteristics crucial for representation and identification.
In addition, the fuzzy KNN algorithm is applied for the matching process. This algorithm is tailored to leverage the extracted features by integrating the DWT and PCA methods, enhancing its efficacy in classifying iris patterns and ultimately enabling the accurate identification of individuals. By integrating these methodologies, the proposed approach aims to achieve robust and precise iris recognition. The centroid’s position of the iris is then employed to issue commands for controlling a robot. This innovative approach harnesses iris movement as a form of communication and control, presenting a promising breakthrough in assisting individuals with physical disabilities.
The localization phase ensures the accurate identification of iris boundaries, while the segmentation and feature analysis phases enable the extraction of discriminative iris features. Finally, the fuzzy KNN algorithm enhances classification efficiency, contributing to reliable identification outcomes. The evaluation and testing on the CASIA database confirm the tool’s validity and its ability to recognize the human iris.
The proposed method outperformed existing methods in qualitative and quantitative evaluations but had a long completion time because of the segmentation and classification algorithms. Future work should therefore focus on real-time performance. By optimizing the algorithms and hardware, the aim is to minimize the time required for iris recognition without compromising accuracy. Improvements could involve fine-tuning the model’s architecture, using techniques like network pruning or quantization, to reduce the model’s size and improve the inference speed. In addition, implementing hardware accelerators, like GPUs, TPUs, or FPGAs, can significantly boost the processing speed. We plan to test and integrate such hardware, especially when running on robots equipped with more powerful embedded systems. Similarly, we plan to use convolutional neural networks or attention-based models trained on large gaze datasets to estimate the gaze direction more accurately and robustly under diverse conditions. These models are better suited to capture the subtleties of eye movement and head pose interactions.
Additionally, we propose integrating fabric-type actuators using point clouds through deep-learning techniques. Future work will incorporate predictive modeling of flexible electrohydrodynamic (EHD) pumps, using the KAN framework to further improve the system performance. Furthermore, data fusion techniques will be included in future work to fuse and aggregate data from different information sources, such as iris and fingerprint biometrics.

Author Contributions

S.B.C. is responsible for idea and methodology development, algorithm implementation and validation, and manuscript writing; R.H. and H.S. are responsible for supervision, idea and methodology discussion, algorithm checking, and manuscript refinement. Apart from the above contributions, H.S. is also responsible for manuscript finalization. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

No data or materials are available for this research.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

FIS:fuzzy inference system
BES:bald eagle search
TSR:true success rate
PCA:principal component analysis
FKNN:fuzzy k-nearest neighbors
SIFT:scale-invariant feature transformation
SURF:sped-up robust feature
DWT:discrete wavelet transform
LL:low–low
LH:low–high
HL:high–low
HH:high–high
ED:Euclidean distance
FAMT:fast algorithm for multi-level thresholding
FSRA:fast statistical recursive algorithm
TSOM:two-stage multi-threshold Otsu method
EER:equal error rate
IRR:iris recognition rate
TP:true positive
FP:false positive
FN:false negative
LGWHD:log-Gabor wavelet and Hamming distance
LIV:local intensity variation
ML:machine learning
CNNs:convolutional neural networks
SLMF:supervised learning based on matching features
LIFD:local invariant feature descriptor
FSIFT:Fourier–SIFT method
FIFCN:fuzzified image filter and capsule network
2DGF:2D Gabor filter 
CEDCNN:Canny edge detection and convolutional neural network
CEDHD:Canny edge detection with high definition

References

  1. Otti, C. Comparison of biometric identification methods. In Proceedings of the 2016 IEEE 11th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 12–14 May 2016; pp. 1–7. [Google Scholar]
  2. Sumalatha, A.; Rao, A.B. Novel method of system identification. In Proceedings of the 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), Chennai, India, 3–5 March 2016. [Google Scholar]
  3. Byron, C.D.; Kiefer, A.M.; Thomas, J.; Patel, S.; Jenkins, A.; Fratino, A.L.; Anderson, T. Correction to: The authentication and repatriation of a ceremonial tsantsa to its country of origin (Ecuador). Herit. Sci. 2021, 9, 50. [Google Scholar] [CrossRef]
  4. Ortega, M.; Penedo, M.G.; Rouco, J.; Barreira, N.; Carreira, M.J. Retinal verification using a feature points-based biometric pattern. EURASIP J. Adv. Signal Process. 2009, 9, 235746. [Google Scholar] [CrossRef]
  5. Malgheet, J.R.; Manshor, N.B.; Affendey, L.S. Iris recognition development techniques: A comprehensive review. Complex. J. 2021, 2021, 6641247. [Google Scholar] [CrossRef]
  6. Daugman, J.G. How iris recognition works. IEEE Trans. Circ. Syst. Video Technol. 2004, 14, 21–30. [Google Scholar] [CrossRef]
  7. Alkoot, F.M. A review on advances in iris recognition methods. Int. J. Comput. Eng. Res. 2012, 3, 1–9. [Google Scholar] [CrossRef]
  8. Alonso-Fernandez, F.; Tome-Gonzalez, P.; Ruiz-Albacete, V.; Ortega-Garcia, J. Iris recognition based on SIFT features. In Proceedings of the 2009 First IEEE International Conference on Biometrics, Identity and Security (BIdS), Paris, France, 22–23 September 2009. [Google Scholar]
  9. Mehrotra, H.; Sa, P.K.; Majhi, B. Fast segmentation and adaptive surf descriptor for iris recognition. Math. Comput. Model 2013, 58, 132–146. [Google Scholar] [CrossRef]
  10. Ismail, A.I.; Ali, H.S.; Farag, F.A. Efficient enhancement and matching for iris recognition using SURF. In Proceedings of the 2015 5th National Symposium on Information Technology: Towards New Smart World (NSITNSW), Riyadh, Saudi Arabia, 17–19 February 2015; pp. 1–5. [Google Scholar]
  11. Masek, L. Recognition of human iris patterns for biometric identification. Univ. West. Aust. Sch. Comput. Sci. Softw. Eng. 2003, 4, 1–56. [Google Scholar]
  12. Ma, L.; Tan, T.; Wang, Y.; Zhang, D. Local intensity variation analysis for iris recognition. Pattern Recogn. 2004, 37, 1287–1298. [Google Scholar] [CrossRef]
  13. Saminathan, K.; Chithra, D.; Chakravarthy, T. Pair of iris recognition for personal identification using artificial neural networks. Int. J. Comput. Sci. Issues (IJCSI) 2012, 9, 324–327. [Google Scholar]
  14. Abiyev, R.; Altunkaya, K. Personal iris recognition using neural networks. Int. J. Secur. Its Appl. (IJSIA) 2008, 2, 41–50. [Google Scholar]
  15. Vytautas, V.; Bulling, A. Eye gesture recognition on portable devices. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; pp. 711–714. [Google Scholar]
  16. Sheela, S.V.; Vijaya, P.A. Iris recognition methods-survey. Int. J. Comput. Appl. 2010, 3, 19–25. [Google Scholar] [CrossRef]
  17. Ranjan, R.; Avasthi, V. Edge Detection in Digital Images Through Fast Gradient Filters using Fuzzy Inference System. In Proceedings of the IEEE World Conference on Applied Intelligence and Computing (AIC), Sonbhadra, India, 17–19 June 2022. [Google Scholar]
  18. Nicaire, N.F.; Steve, P.N.; Salome, N.E.; Grégroire, A.O. Parameter estimation of the photovoltaic system using bald eagle search (BES) algorithm. Int. J. Photoenergy 2021, 2021, 4343203. [Google Scholar] [CrossRef]
  19. Rana, K.; Azam, M.S.; Akhtar, M.R.; Quinn, J.M.; Moni, M.A. A fast iris recognition system through optimum feature extraction. PeerJ Comput. Sci. 2019, 5, e184. [Google Scholar] [CrossRef]
  20. Zhang, Q.; Sheng, J.; Zhang, Q.; Wang, L.; Yang, Z.; Xin, Y. Enhanced Harris hawks optimization-based fuzzy k-nearest neighbor algorithm for diagnosis of Alzheimer’s disease. Comput. Biol. Med. 2023, 165, 107392. [Google Scholar] [CrossRef] [PubMed]
  21. Liao, P.S.; Chen, T.S.; Chung, P.C. A fast algorithm for multi-level thresholding. J. Inf. Sci. Eng. 2001, 17, 713–727. [Google Scholar]
  22. Arora, S.; Acharya, J.; Verma, A.; Panigrahi, P.K. Multilevel thresholding for image segmentation through a fast statistical recursive algorithm. Pattern Recognit. Lett. 2008, 29, 119–125. [Google Scholar] [CrossRef]
  23. Huang, D.; Wang, C. Optimal multi-level thresholding using a two-stage Otsu optimization approach. Pattern Recognit. Lett. 2009, 30, 275–284. [Google Scholar] [CrossRef]
  24. Mbarki, Z.; Seddik, H.; Braiek, E.B. A rapid hybrid algorithm for image restoration combining parametric Wiener filtering and wave atom transform. J. Vis. Commun. Image Represent. 2016, 40, 694–707. [Google Scholar] [CrossRef]
25. Shen, W.; Surette, M.; Khanna, R. Evaluation of automated biometrics-based identification and verification systems. Proc. IEEE 1997, 85, 1464–1478. [Google Scholar]
  26. Meyer, P.G.; Cherstvy, A.G.; Seckler, H.; Hering, R.; Blaum, N.; Jeltsch, F.; Metzler, R. Directedeness, correlations, and daily cycles in springbok motion: From data via stochastic models to movement prediction. Phys. Rev. Res. 2023, 5, 043129. [Google Scholar] [CrossRef]
  27. AlRifaee, M.; Almanasra, S.; Hnaif, A.; Althunibat, A.; Abdallah, M.; Alrawashdeh, T. Adaptive Segmentation for Unconstrained Iris Recognition. CMC-Comput. Mater. Contin. 2024, 78, 1591–1609. [Google Scholar] [CrossRef]
  28. Mashayekhbakhsh, T.; Meshgini, S.; Rezaii, T.Y.; Makouei, S. SRU-Net: A novel spatiotemporal attention network for sclera segmentation and recognition. Pattern Anal. Appl. 2024, 27, 90. [Google Scholar] [CrossRef]
  29. Shalaby, A.S.; Gad, R.; Hemdan, E.E.D.; El-Fishawy, N. An efficient CNN based encrypted Iris recognition approach in cognitive-IoT system. Multimed. Tools Appl. 2021, 80, 26273–26296. [Google Scholar] [CrossRef]
  30. Hernandez-Garcia, E.; Martin-Gonzalez, A.; Legarda-Saenz, R. Iris recognition using supervised learning based on matching Features. In Proceedings of the International Symposium on Intelligent Computing Systems, Universidad de Chile, Santiago, Chile, 23–25 March 2022; pp. 44–56. [Google Scholar]
  31. Jin, Q.; Tong, X.; Ma, P.; Bo, S. Iris recognition by new local invariant feature descriptor. J. Comput. Inf. Syst. 2013, 9, 1943–1948. [Google Scholar]
  32. Kumar, A.; Majhi, B. Isometric efficient and accurate Fourier-SIFT method in iris recognition system. In Proceedings of the 2013 International Conference on Communication and Signal Processing, Sharjah, United Arab Emirates, 3–5 April 2013; pp. 809–813. [Google Scholar]
  33. Khan, T.M.; Bailey, D.G.; Khan, M.A.U.; Kong, Y. Real-time iris segmentation and its implementation on FPGA. J. Real-Time Image Process. 2020, 17, 1089–1102. [Google Scholar] [CrossRef]
  34. Aiyeniko, O.; Adekunle, Y.A.; Eze, M.O.; Alao, O.D. Performance analysis of feature extraction and its fusion techniques for iris recognition system. Glob. J. Artif. Intell. 2020, 2, 7. [Google Scholar]
  35. Farouk, R.H.; Mohsen, H.; El-Latif, Y.M.A. A Proposed Biometric Technique for Improving Iris Recognition. Int. J. Comput. Intell. Syst. 2022, 15, 79. [Google Scholar] [CrossRef]
Figure 1. Control cycle of a robot based on iris recognition.
Figure 2. Image templates of the author, for different gaze directions, taken during initialization. During operation, the current eye image is matched against these templates to estimate the gaze direction.
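As a hedged illustration of the template idea in Figure 2, the Python sketch below compares the current eye image against stored per-direction templates using normalized cross-correlation. It is not the authors' implementation (the paper's matcher is fuzzy KNN), and the template file names are hypothetical placeholders.

# Illustrative sketch only: estimate gaze direction by correlating the
# live eye image with stored direction templates. File names are
# hypothetical placeholders, not assets from the paper.
import cv2

TEMPLATES = {"left": "left.png", "middle": "middle.png", "right": "right.png"}

def estimate_gaze(eye_gray):
    best_dir, best_score = None, -1.0
    for direction, path in TEMPLATES.items():
        template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Normalized cross-correlation; the peak value measures similarity.
        score = cv2.matchTemplate(eye_gray, template, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_dir, best_score = direction, score
    return best_dir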
Figure 3. The proposed iris recognition method and finding the centroid location: (a) sample image of the author, (b) localization of the iris, (c) iris segmentation, (d) iris detection, and (e) finding the centroid location.
Figure 4. The diagram of the proposed method.
Figure 6. The localization of the iris: (a) original image (human eye), (b) edge detection through fast gradient filters using a fuzzy inference system (FIS), (c) the localization of the iris, and (d) the image segmented into two regions (iris and background) using the bald eagle search (BES) algorithm.
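For readers who want a concrete starting point, a minimal Python sketch of gradient-magnitude edge detection with a simple fuzzy membership stage, loosely in the spirit of the FIS filter of [17], follows; the ramp membership function and the 0.5 cut are illustrative assumptions, not the rule base used in the paper.

import numpy as np
import cv2

def fuzzy_edge_map(gray, low=20.0, high=60.0):
    # Sobel gradients in x and y, combined into a gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    # Ramp membership function: degree to which a pixel is "edge-like".
    mu = np.clip((mag - low) / (high - low), 0.0, 1.0)
    # Defuzzify with a 0.5 cut to obtain a binary edge map.
    return (mu > 0.5).astype(np.uint8) * 255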
Figure 7. The coordinates representing each direction: (a) the iris's center coordinates (Cx, Cy), (b) the middle direction, (c) the right direction, and (d) the left direction.
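The mapping from the centroid (Cx, Cy) of the segmented iris to a motion command can be sketched as below; the 40%/60% band boundaries are illustrative assumptions rather than the thresholds used by the authors.

import numpy as np

def centroid_to_command(iris_mask):
    # Centroid of the segmented iris region (nonzero pixels of the mask).
    ys, xs = np.nonzero(iris_mask)
    cx, cy = xs.mean(), ys.mean()
    width = iris_mask.shape[1]
    # Partition the image into left / middle / right bands.
    if cx < 0.4 * width:
        return "left"
    if cx > 0.6 * width:
        return "right"
    return "forward"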
Figure 8. Image segmentation: (a) original image, (b) iris localization, (c) segmented image (three regions: iris, pupil, and background), and (d) segmented image (two regions: iris and background).
Figure 9. Examples of irises of the human eye. Twelve were selected for a comparison study. The patterns are numbered from 1 through 12, starting at the upper-left-hand corner. Images are from the CASIA iris database [8].
Figure 10. The proposed method’s rates of iris recognition based on the number of training images and feature vector dimensions.
Figure 11. Iris recognition evaluation results: (a) true positive, (b) false positive, (c) false negative, and (d) iris recognition rate.
Figure 12. The recognition performances of supervised learning based on matching features (SLMF) [30], the local invariant feature descriptor (LIFD) [31], the Fourier–SIFT method (FSIFT) [32], the fuzzified image filter and capsule network method [33], the 1D log-Gabor and 2D Gabor filter and discrete cosine transform method [34], the Canny edge detection CHT and CNN method [35], and the proposed method in the CASIA iris database: (a) EER (%) and (b) IRR (%).
Figure 13. The movement of the robot in different directions: (a) moving forward, (b) moving rightward, and (c) moving leftward.
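Dispatching the recognized direction to the robot can be sketched as below; set_wheel_speeds is a hypothetical differential-drive motor interface introduced purely for illustration, not an API from the paper.

def drive(command, set_wheel_speeds):
    # set_wheel_speeds(left, right) is a hypothetical motor interface.
    if command == "forward":
        set_wheel_speeds(1.0, 1.0)
    elif command == "left":
        set_wheel_speeds(0.2, 1.0)   # slower left wheel turns the robot left
    elif command == "right":
        set_wheel_speeds(1.0, 0.2)   # slower right wheel turns the robot right
    else:
        set_wheel_speeds(0.0, 0.0)   # stop on unknown input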
Table 1. Segmentation sensitivities (%) of FAMT [21], FSRA [22], BWOA [23], and BES [18] for the twelve images shown in Figure 9.

Image       FAMT      FSRA      BWOA      BES
Image 1     97.6642   96.7067   97.7610   98.7395
Image 2     96.6461   95.6986   96.7419   97.7102
Image 3     97.7018   96.7440   97.7987   98.7775
Image 4     98.2627   97.2994   98.3601   99.5444
Image 5     95.7377   94.7991   95.8326   96.7917
Image 6     96.6490   95.7014   96.7447   97.7131
Image 7     96.5125   96.0371   96.6082   97.5751
Image 8     97.7358   97.2543   97.8327   98.8119
Image 9     98.6629   98.1769   97.7915   99.7492
Image 10    95.5424   95.0718   94.6986   96.5943
Image 11    95.6187   95.1476   94.7741   96.6714
Image 12    96.6222   96.1463   95.7688   97.6860
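As a hedged note on how per-image sensitivities like those in Table 1 can be obtained, the sketch below scores a predicted iris mask against a ground-truth mask as TP / (TP + FN), in percent; the authors' exact evaluation protocol may differ.

import numpy as np

def segmentation_sensitivity(pred_mask, gt_mask):
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # iris pixels correctly labeled
    fn = np.logical_and(~pred, gt).sum()   # iris pixels missed
    return 100.0 * tp / (tp + fn)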
Table 2. The iris recognition performances of different approaches in the CASIA database.

Method                              EER (%)   IRR (%)
SLMF method [30]                    0.1782    98.9438
LIFD method [31]                    0.4324    97.6227
FSIFT method [32]                   0.2956    98.6356
FIFCN method [33]                   0.2236    83.1248
2DGF method [34]                    0.1756    92.2214
CEDCNN method [35]                  0.3442    91.5619
CEDHD method [35]                   0.2854    94.8827
Proposed iris recognition method    0.1381    99.3827
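For completeness, a minimal sketch of how an equal error rate (EER) such as the values in Table 2 can be estimated from genuine and impostor match scores (NumPy arrays) follows; it assumes higher scores mean better matches, which may not match the authors' scoring convention.

import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep every observed score as a decision threshold and return the
    # operating point where false accept and false reject rates are closest.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = np.inf, None
    for t in thresholds:
        far = np.mean(impostor >= t)   # false accept rate
        frr = np.mean(genuine < t)     # false reject rate
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer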