Article

A Method for Predicting the Visual Attention Area in Real-Time Using Evolving Neuro-Fuzzy Models

1 Department of Computer Science, COMSATS University, Islamabad-Abbottabad Campus, Abbottabad 54000, Pakistan
2 Department of Computer Science, COMSATS University, Islamabad-Lahore Campus, Lahore 54000, Pakistan
3 Department of Higher Education, KPK, Abbottabad 54000, Pakistan
4 EIAS Datascience and Blockchain Lab, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
* Author to whom correspondence should be addressed.
Electronics 2023, 12(10), 2243; https://doi.org/10.3390/electronics12102243
Submission received: 15 March 2023 / Revised: 20 April 2023 / Accepted: 27 April 2023 / Published: 15 May 2023
(This article belongs to the Special Issue Visual Analytics, Simulation, and Decision-Making Technologies)

Abstract:
This research paper presents the prediction of the visual attention area on a visual display using an evolving rule-based fuzzy model: the evolving Takagi–Sugeno (eTS) model. The evolving fuzzy model is feasible for predicting the visual attention area because of its non-iterative, recursive, online, and real-time nature. Predicting the visual attention area through a web camera is a problem that requires online adaptive systems with high accuracy and performance. The proposed approach, which uses an evolving fuzzy model to predict the eye-gaze attention area on a visual display in an ambient environment (to provide further services), mimics the human cognitive process and its flexibility to generate fuzzy rules without any prior knowledge. The proposed Visual Attention Area Prediction using Evolving Neuro-Fuzzy Systems (VAAPeNFS) approach can quickly generate compact fuzzy rules from new data. Numerical experiments conducted in a simulated environment further validate the performance and accuracy of the proposed model. To validate the model, the forecasting results of the eTS model are compared with those of the DeTS and ANFIS models. The results show the high accuracy, transparency, and flexibility achieved by applying the evolving online versions compared to other offline techniques. The proposed approach significantly reduces the computational overhead, which makes it suitable for any sort of AmI application. Thus, using this approach, we achieve reusability, robustness, and scalability with better performance and high accuracy.

1. Introduction

Ambient Intelligence (AmI) aims at providing invisible support, learning, adaptive operations, and services ubiquitously [1]. One of the primary challenges when deploying such a ubiquitous environment is awareness of human attention states [2] in order to provide several types of services. Typically, such an environment is equipped with sensors and devices (eye trackers, face readers, etc. [3]) capable of recognizing human emotion from facial expressions [4] or through hand gestures, body movement, and speech [5,6]. The sensed information can be used in different attention models or with a simple computational methodology [7] to figure out the user's attention state/area. However, these sensing devices are costly and difficult to deploy. Alternatively, cheap/ordinary devices for the precise estimation of the visual attention area, working in integration with computational methodologies, are still desirable.
Computational intelligence techniques are useful to model the environmental context and the relationships among events; thus, they are useful for an AmI environment. Hence, AmI-based computational intelligence has received significant research attention [8,9]. However, research and development in relation to AmI using computational methodologies/techniques remain challenging for several reasons, including measurement errors and non-linear relationships between (in)dependent data variables. In recent years, fuzzy rule-based computational logic [10] has become the state-of-the-art technique to link linear sub-models in linguistic If-Then rule statements that express non-linear relations among several data variables. Therefore, embedding fuzzy logic in an AmI environment should make the prediction task easy to accomplish for the previously mentioned reasons, which is otherwise difficult to achieve using conventional mathematical approaches. However, a problem with such linguistic If-Then rules is that the rules and membership functions are often generated by a human expert, which is a time-consuming, laborious, expensive, and cumbersome task. Data clustering techniques, in contrast, group the data automatically based on their similarity and can also be used to define the fuzzy rules and membership functions. Moreover, there is a dire need to define data clusters for fuzzy logic in real-time, as the data are often non-stationary, which requires adapting the rule structure under dynamic conditions. Consequently, recently developed techniques have been proposed in [11,12,13,14,15,16], with more advanced research work in [17,18], concerning online and evolving clustering. These online and evolving classifiers [11,12,14] evolve (shrink or expand) the model structure, which can adjust itself to a changing environment and can thus be worthy for the AmI visual attention prediction task.
In [19], a novel technique using adaptive neuro-fuzzy schemes was proposed to enhance the software evolution process. This technique can be easily incorporated into any refactoring scheduling and prioritization model, as it is efficiently designed using a fast-training scheme based on a neuro-fuzzy architecture.
In [20], a novel method was developed to guarantee that a photovoltaic (PV) system produces at its peak under varying sunlight and temperature conditions, where maximum power point tracking (MPPT) plays a critical role in obtaining the maximum power from the system. The adaptive neuro-fuzzy inference system (ANFIS), created by combining an artificial neural network (ANN) with a fuzzy logic controller (FLC), was presented in this study as the MPPT algorithm. Under conditions of temperature and irradiance change, the effectiveness of the ANFIS algorithm was evaluated in Matlab/Simulink and contrasted with the fixed-step traditional perturb and observe (P&O) and gradient descent procedures. The results demonstrated a notable improvement in the PV system's performance when employing the ANFIS–MPPT technique, which also offered quicker convergence, stability in a steady state, and fewer oscillations around the MPP.
In [21], a deep learning-based gaze estimation technique was developed to address the problem of car crash deaths caused by driver inattentiveness, with an emphasis on a WSN-based convolutional neural network system.
A similar sort of work was addressed in [22], which discussed the main challenges and sources of error associated with sensing visual attention using mobile device cameras in in-the-wild environments.
In [23], the real-time eye-gaze estimation problem was addressed using a low-resolution ordinary camera as opposed to expensive eye-gaze tracking techniques. In that research, a camera-based, more non-invasive technique was used, and a gaze-driven interface was designed for virtual interaction tasks to assess the performance and usability of the proposed framework.
The research presented in [24] defined a method based on electroencephalographic (EEG) signals for measuring attention in a 13-year-old boy diagnosed with autism spectrum disorder (ASD), using a multi-layer perceptron neural network (MLP–NN) model. The findings of this research made it possible to develop learning scenarios better suited to the needs of a person with ASD. Additionally, they made it possible to obtain quantifiable information on the person's progress to strengthen the perception of the teacher or therapist.
The investigation of the acoustic features of both healthy and sick livestock was discussed in [25]. In particular, the platform incorporated secure audio-wellness features to assess livestock voice information on the fly and identify sick birds. Long-term recognition experiments were conducted, with the set of sick birds recognized with an accuracy of about 99% using an adaptive neuro-fuzzy inference system (ANFIS); the recognition accuracy of an artificial neural network was inferior to that of the ANFIS. This is a trustworthy technique for researchers to conduct studies and gather proof of the curability of diseases or the elimination of those that are incurable.
This research work aims to propose a technique for predicting the visual attention area using a self-developing fuzzy rule-based classifier, the evolving Takagi–Sugeno (eTS) [14], and its further advanced form, the dynamically evolving Takagi–Sugeno (DeTS) [17,18]. The eTS and DeTS models are well suited to dealing with high levels of vagueness and to predicting accurate coordinates on a display, and they can start from an empty rule base. Our proposed technique is based on a combination of theories and methodologies derived from the computational intelligence and psychology areas. Furthermore, this forecasting technique is useful in scenarios where it is hard to determine the relationship between the dependent and independent variables. As discussed earlier, AmI requirements can differ from one environment to another and from person to person, so writing separate models for all those environments is entirely inconvenient. This paper is thus novel in four ways: (i) an evolving version of the Takagi–Sugeno fuzzy model is adapted to be suitable for the attention area application in AmI, (ii) the fuzzy rules are generated automatically, (iii) the fuzzy forecasting techniques are evaluated and compared in the context of visual attention area prediction with the help of an ordinary inexpensive web camera, and (iv) the results of various evolving models (eTS and DeTS) are compared with offline techniques (the Adaptive Neuro-Fuzzy Inference System (ANFIS) [9,26] and SLR). Experimental results show that our proposed Visual Attention Area Prediction using Evolving Neuro-Fuzzy Systems (hereafter VAAPeNFS) approach can predict the user's attention area with high accuracy and with improved transparency and flexibility as compared to conventional techniques.
The rest of the paper is organized as follows. Section 2 presents the related work, while Section 3 introduces the basic concepts and background necessary to develop our proposed approach. A case study integrated with the proposed approach is described in Section 4, and Section 5 presents the experimental set-up. The performed experiments are presented in Section 6, and Section 7 concludes the paper.

2. Related Work

Fuzzy rule-based systems can be categorized into two types: (i) offline and (ii) evolving and online. The application of the offline type of fuzzy rule-based system for prediction has been of particular interest in various research domains. For instance, in [27] the authors presented an application to integrate fuzzy logic into AmI. Further research has focused on providing services using the computational intelligence techniques of AmI. The research in [28] presented an enhanced type-2 fuzzy rule-based agent to control the environment on behalf of the users. The authors in [29] provided health monitoring and assistance technology to help humans who live independently in their homes, with the design of an algorithm that recognizes their daily activities and then classifies them as erroneous or inconsistent activities using Markov models. The work in [30] proposed an in-home healthcare monitoring system using fuzzy logic along with a set of rules, formalizing the medical recommendations used to fuse the various subsystem outputs. A more recent research study [31] used human behavior patterns to learn a scene or to predict future behaviors.
Another type of fuzzy rule-based approach is the evolving Takagi–Sugeno (eTS) [11,12] system, which is evolving and online in nature, has several applications in real-time prediction research domains, and has recently gained research attention. These fuzzy rule-based systems dynamically update the cluster structure for each new data sample and recursively update the consequent parameters of the resulting linear equations. Various state-of-the-art applications have been reported using these dynamic and evolving systems. For instance, ref. [32] reported real-time prediction and online monitoring of the products of a crude oil refinery distillation process. In [33,34], a method was presented for approximating the forward and inverse dynamic behaviors of a magneto-rheological damper using the eTS system and a genetic algorithm; the identification problem is significantly difficult because of the highly non-linear dynamics of the magneto-rheological damper. In [35,36], an alternative approach was presented for the eTS system by integrating the Kalman filter for novelty and object detection as well as tracking in real-time video streams. Moreover, the authors in [37] used the eTS system to predict time series on non-linear benchmark transportation datasets dealing with a crane system. In [38,39], using the eTS system, the authors proposed to estimate and track reservoir dynamic changes during CO2 sequestration and to perform the online identification of a multi-input, multi-output distillation column, respectively. Furthermore, the eTS system and integrated probabilistic models were used to capture the multi-model and evolving nature of driving behavior in [40]. In another research problem [41], faced by one of the leading UK water supply companies, the eTS system was used for predicting water leakage.
All the previously mentioned literature sources agree that the eTS system predicts well and is convenient to design.
From the above discussion, it can be observed that fuzzy rule-based systems are used for various prediction tasks, and that the second type, which works in an online and evolving manner, has produced significantly improved results. However, to the best of our knowledge, one research domain where the eTS system has not yet been used is AmI prediction tasks. Therefore, this study aims to address this research gap by predicting the visual attention area in the AmI task. Using the eTS system [11,12] in the AmI task for predicting the visual attention area on multi-variate data thus provides a cheaper and more accurate application that can automatically adapt to various environments dynamically.

3. Proposed Approach

3.1. Evolving Takagi–Sugeno (eTS) Fuzzy Model

The evolving Takagi–Sugeno (eTS) model [12,14,16,42,43] is inspired by the Takagi–Sugeno fuzzy rule-based model [44], which has proven itself a powerful tool for the intelligent modelling and control of complex systems because of its non-linear characteristics. However, this model can also be treated as a linear model with respect to the consequent parameters. Furthermore, it works in a dynamic and evolving mode: the number of rules/sub-models is not fixed or pre-defined, and learning is online. The eTS model is based on a combination of unsupervised online clustering and weighted Recursive Least Squares (wRLS) for the locally and globally optimal estimation of the parameters of the consequent part. The eTS model relies on evolving clustering (eClustering), which stems from mountain and subtractive clustering but assumes evolving clusters (it does not require the number of clusters to be pre-specified). It allows the gradual updating of the antecedent part of the fuzzy If-Then rules, whereas the consequent parameters are updated using global or local learning. In addition, eClustering does not use the mean of the data points but rather real data (it is prototype based); hence, it is one-pass, non-iterative, and recursive. A recent modified form, eClustering+ [45], does not have any user- or problem-specific thresholds. The resulting cluster centers are used to define the focal points of the rules in these eClustering(+) models, and each cluster corresponds to one fuzzy rule (see Equation (1)).
$$R_k: \text{IF } x_1 \text{ is } A_{k1} \text{ AND } \dots \text{ AND } x_n \text{ is } A_{kn} \text{ THEN } y_k = a_{k0} + a_{k1} x_1 + \dots + a_{kn} x_n; \quad k = 1, \dots, R$$
where $R_k$ denotes the $k$th fuzzy rule, $R$ is the number of fuzzy rules, $x_i$ are the input variables, $A_{ki}$ denote the antecedent fuzzy sets, $i = 1, \dots, n$; $y_k$ is the output of the $k$th linear subsystem, and $a_{kl}$ are its parameters, $l = 0, \dots, n$.
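To make the rule structure of Equation (1) concrete, the following sketch evaluates a first-order Takagi–Sugeno model with Gaussian antecedents and a normalized weighted average of the linear consequents. The function and parameter names are illustrative, not from the paper's implementation:

```python
import numpy as np

def ts_output(x, centers, sigmas, theta):
    """First-order Takagi-Sugeno inference: Gaussian antecedents,
    linear consequents, normalized weighted average (Equation (1)).

    x:       input vector, shape (n,)
    centers: rule centers, shape (R, n)
    sigmas:  membership spreads, shape (R, n)
    theta:   consequent parameters [a_k0, a_k1, ..., a_kn], shape (R, n+1)
    """
    # Firing level of each rule: product of Gaussian memberships
    tau = np.exp(-((x - centers) ** 2) / (2 * sigmas ** 2)).prod(axis=1)
    lam = tau / tau.sum()                      # normalized firing levels
    x_e = np.concatenate(([1.0], x))           # extended input [1, x1..xn]
    y_rules = theta @ x_e                      # per-rule linear outputs
    return float(lam @ y_rules)                # weighted average of sub-models
```

With a single rule the output reduces to that rule's linear consequent; with several rules each sub-model contributes in proportion to how well the input matches its antecedent.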

3.2. Dynamically Evolving Takagi–Sugeno (DeTS) Fuzzy Model

The dynamically evolving Takagi–Sugeno (DeTS) model [17,18] is another fuzzy rule-based model, which combines Dynamically Evolving Clustering (DEC) with the eTS model. DEC uses density and distance measures to generate a new cluster. It is evolving, fast, recursive, incremental, and memory-efficient, and it does not need the number of clusters to be specified at the start. The density decays exponentially with time, so the dynamics of the data streams can be captured. The data are distributed between "core" and "non-core" clusters, and the "non-core" clusters (clusters with low density) are used to identify the real outliers [28].
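The idea of an exponentially decaying density can be illustrated as follows. This is our own simplified form for illustration only (the exact DEC update is given in [17,18]); the function name and the decay constant `tau` are assumptions:

```python
import numpy as np

def decayed_density(x, past_points, past_times, t_now, tau=100.0):
    """Illustrative density of point x against past samples, with an
    exponential time decay so that old samples count less (a sketch,
    not the DEC formula from [17,18])."""
    x = np.asarray(x, dtype=float)
    # Exponential forgetting: weight of a sample decays with its age
    w = np.exp(-(t_now - np.asarray(past_times, dtype=float)) / tau)
    d2 = np.sum((np.asarray(past_points, dtype=float) - x) ** 2, axis=1)
    # Cauchy-type density: high when x is close to (recent) past points
    return float(np.sum(w / (1.0 + d2)) / np.sum(w))
```

A point coinciding with a recent sample gets a density near 1, while a point far from all recent samples gets a low density; such low-density points populate the "non-core" clusters and flag outliers.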

3.3. The eTS Model for Eye-Gaze Prediction

The proposed approach uses the eTS fuzzy rule-based model for the online identification of eye-gaze prediction and contains the following steps, as defined in [15,42,45].
1.
In the beginning, the rule base is initialized with a single rule, i.e., R = 1. Furthermore, the eClustering+ approach (see Section 3.1) is used to obtain the fuzzy model parameters using the first available data point, x k , and with Equation (2) [45]:
$$C_j^k = C_j^{k-1} - \frac{\lambda_j(x_k)\, C_j^{k-1} x_e^k (x_e^k)^T C_j^{k-1}}{1 + \lambda_j(x_k)\, (x_e^k)^T C_j^{k-1} x_e^k}$$
where $C_j^k$ is the covariance matrix of rule $j$ at the $k$th step, $\lambda_j(x_k)$ is the normalized firing level of rule $j$, and $x_e^k$ is the extended input vector. The first available data point becomes the centre of rule 1, i.e., $x_1^* = x_1$.
2.
In the next time step, the next data point $x_k$ is read and $k$ is set to $k := k + 1$.
3.
Following the previous step, the potentials of all the new data points are recursively calculated in the next step using Equation (3), as defined in [11]:
$$P_k(x_k) = \frac{k-1}{(k-1)(\vartheta_k + 1) + \sigma_k - 2\nu_k}$$
4.
The potentials of the centers for all the existing rules are updated recursively using Equation (4), as described in [42]:
$$P_k(c_l) = \frac{(k-1)\, P_{k-1}(c_l)}{k - 2 + P_{k-1}(c_l) + P_{k-1}(c_l) \sum_{j=1}^{n+1} (c_l^j - x_k^j)^2}$$
5.
In the next step, modification of the rule-based structure is performed using the potential of the new data points. The newly calculated potential is compared with those of the existing rule centers, and the rule-base structure is modified if condition (a) is matched. In addition, if condition (b) is matched, a new rule is inserted using the step described in [41,42].
a.
If the potential of the new data is greater than that of the existing centers, i.e.,
$$P_k(x_k) > P_k(c_l), \quad l = 1, 2, \dots, R$$
As well as if the new data point is close to an old cluster in terms of the center, i.e., [24]:
$$\frac{P_k(x_k)}{\max_{l=1,\dots,R} P_k(c_l)} - \frac{\delta_{min}}{r_c} \geq 1$$
where δ m i n can be calculated as:
$$\delta_{min} = \min_{l=1,\dots,R} \left\| x_k - c_l \right\|$$
where $\delta_{min}$ is the distance from the new data point $x_k$ to the closest of the existing rule centers $c_l$, and the parameter $r_c$ determines the radius of the vicinity [46]. The new data point $x_k$ then replaces the old center and plays the role of a prototype for the rule center.
$$c_{l^*} = \arg\min_{l=1,\dots,R} \left\| x_k - c_l \right\|$$
The new center is characterized by:
$$x_{l^*}^* = x_k, \quad P_k(c_{l^*}) = P_k(x_k)$$
The parameters in the rule consequents and the covariance matrices are kept from the rule that is being replaced using the following equation [16,45]:
$$\tilde{\pi}_k = \tilde{\pi}_k^{j}, \quad C_k = C_k^{j}$$
b.
If the potential of the new data point is greater than that of the existing centers but the closeness condition in (a) is not satisfied, then the new data point is inserted into the rule base as a new rule:
$$R := R + 1, \quad x_R^* = x_k, \quad P_k(c_R^*) = P_k(x_k)$$
The parameters in the rule consequents and the covariance matrices are reset for the global or local estimation.
6.
The rule consequent parameters are updated recursively by means of the RLS as from [46]:
$$C_k = C_{k-1} - \frac{C_{k-1} \psi_{k-1} \psi_{k-1}^T C_{k-1}}{1 + \psi_{k-1}^T C_{k-1} \psi_{k-1}}$$
7.
Finally, the eTS fuzzy rule-base model output is predicted.
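As an illustration of steps 2 and 3, the recursive potential of Equation (3) can be computed with only two running accumulators, so memory does not grow with the data stream. This is a minimal sketch following the recursive form in [11]; the class name and accumulator handling are our own:

```python
import numpy as np

class RecursivePotential:
    """Recursive potential of a new sample (Equation (3)).
    Only two running sums over past points are stored, so each update
    is O(dim) regardless of how many samples have been seen."""

    def __init__(self, dim):
        self.k = 0
        self.sigma = 0.0              # sum of squared norms of past points
        self.beta = np.zeros(dim)     # elementwise sum of past points

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.k == 1:               # first point: potential set to 1
            p = 1.0
        else:
            vartheta = float(x @ x)           # squared norm of new point
            nu = float(x @ self.beta)         # cross term with past points
            p = (self.k - 1) / ((self.k - 1) * (vartheta + 1)
                                + self.sigma - 2 * nu)
        # accumulate AFTER computing the potential of x
        self.sigma += float(x @ x)
        self.beta += x
        return p
```

The closed form is equivalent to the Cauchy-type potential $1/(1 + \frac{1}{k-1}\sum_l \|x_k - x_l\|^2)$, i.e., high for points close to previously seen data, but computed without storing the history.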

4. Case Study

4.1. Problem Statement for the Visual Attention Area Forecasting Problem

As mentioned previously (see Section 1), the primary aim of AmI is to provide services ubiquitously. One way is to find the user's attention area in order to provide nearby services. In the past literature, the eye-gaze is the cue most widely used to determine the user's attention area for providing numerous services in AmI applications, e.g., as cited in [47,48,49]. However, human eye-gaze attention is dynamic, non-linear, and non-stationary [3,9]. Therefore, offline and fixed-model-structure approaches have severe limitations (see Section 2 and Section 3) and are not suitable for the eye-gaze attention type of task. This research work aims to investigate whether the visual attention area of a scene/task on which a human is focusing can be detected using machine learning-based techniques. This is of great importance, particularly in scenarios or tasks where objects are very close to the point of the human's eye-gaze attention. Visual attention is usually estimated using an eye tracker, which has been successfully used in various AmI applications, for instance, human–computer interaction [50], sports (detecting flaws linked to attentional focus trajectory estimations), and security and law enforcement [51]. However, a big hurdle arises in the AmI environment when users have to purchase and wear eye-tracking devices.
This research work aims to investigate and develop an alternative solution that determines the user's attention area without any wearable eye-tracking device, using devices available in our routine life: cameras (available almost everywhere). Such an alternative can serve as an interface between a human and several AmI services, under the assumption that machine learning techniques can predict the exact or closest area of the user's visual attention with high precision.

4.2. The eTS Fuzzy Model for Visual Attention Area Prediction

The eTS [42] fuzzy model presented previously (see Section 3) has been integrated into a simulated environment (see Section 4.3) to predict the visual attention area (hereafter Visual Attention Area Prediction using Evolving Neuro-Fuzzy Systems (VAAPeNFS)). The reasons for shortlisting this model to predict the attention area in a simulated environment are manifold: it is (i) online, (ii) evolving, and (iii) self-developing in nature. Moreover, as described previously, this model combines linguistic fuzzy If-Then rules; thus, it learns from data and does not require expert knowledge. The rule identification of this model, as described in [46] and applied in our simulated environment, consists of two stages: (i) decomposing the data frame of the eye-gaze location in the current as well as in the next predicted local image space, and (ii) adapting the parameters of the fuzzy rules' consequent part for the visual attention problem. These stages are performed in real-time, within an interval shorter than the time of arrival of the next data frame, and are thus appropriate for an AmI environment. This model is highly non-linear, non-stationary, and evolving (fuzzy rules can be added/removed according to the data pattern in the input–output image space) and is thus very useful in VAAPeNFS. Furthermore, in VAAPeNFS, the initial aim is to predict the position of the eye-gaze with the help of the mouse coordinates. This ensures that the user can place the mouse cursor on the particular position of the screen at which they look in the simulated environment (to capture data for prediction in offline mode); this is the (t + 1)th position, defined as follows:
$$\tilde{y}_{t+1}^* = f(y_t^*)$$
where y ~ t + 1 * is the predicted position of the eye-gaze in the (t + 1)th simulated environment. These neuro-fuzzy models (eTS and DeTS) are represented by linguistically fuzzy rules as:
$$Rule_j: \text{IF } (x_m \text{ is } \chi^*) \text{ AND } (y_m \text{ is } \omega^*) \text{ THEN } \begin{cases} \tilde{x}_{t+1} = a_0 + a_1 x_m + a_2 y_m \\ \tilde{y}_{t+1} = b_0 + b_1 x_m + b_2 y_m \end{cases}$$
where a and b are the parameters of the (linear) consequent part, and x and y are the horizontal and vertical coordinates of the eye-gaze within the simulated environment.
The above steps (2 to 6) are repeated until all the data points have been read. It is worth mentioning that only the first step is performed in offline mode; the rest are performed online. The flow chart of the algorithm used [15,42,45] is summarized in Figure 1.

4.3. Simulated Environment for Neuro-Fuzzy Models

In this research work, data for the training/testing of the eTS neuro-fuzzy model (see Section 3) have been gathered from the VAAPeNFS (see Section 4.2). In this simulated environment, designed for the proposed approach (see Figure 2), the user's task consists of classifying each object as an "ally" or an "enemy" and shooting the enemies while allowing the allies to land. When the user clicks on an object with the mouse, an arithmetic formula appears (representing identification information sent from the control room); a correct formula indicates an ally, and a wrong one indicates an enemy. An enemy then turns red, whereas an ally turns green. The player uses a cannon at the bottom of the screen to shoot down objects. Each shot at an enemy adds one point to the score, while, on the contrary, the score is reduced if an ally is shot. The mouse coordinates, x(m) and y(m), are displayed on the bottom left side of the screen. On the bottom right side of the screen, the x(g) and y(g) eye-gaze coordinates are shown; these are obtained using a Haar cascade classifier [39] trained for eye-gaze detection.
It is necessary to identify input variables in the VAAPeNFS. For this research work, the input for the neuro-fuzzy model consists of four variables, x ( g ) , y ( g ) , x ( m ) and y ( m ) at each k t h time instance. These four variables are used by these models to predict the eye-gaze visual attention area at the next ( k + 1 ) t h time instance.
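Forming the one-step-ahead training pairs from these four variables can be sketched as follows. The function and array names are illustrative, not from the authors' implementation:

```python
import numpy as np

def make_stream_pairs(xg, yg, xm, ym):
    """Pair the four input variables at time instance k with the gaze
    coordinates at k+1, as used for one-step-ahead prediction
    (a sketch; names are illustrative)."""
    # inputs at instance k: [xg, yg, xm, ym]
    X = np.column_stack([xg[:-1], yg[:-1], xm[:-1], ym[:-1]])
    # targets at instance k+1: the next gaze coordinates
    Y = np.column_stack([xg[1:], yg[1:]])
    return X, Y
```

Each row of `X` is one input vector for the neuro-fuzzy model and the matching row of `Y` is the gaze position it should predict at the next time instance.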

5. Experimental Set-Up

5.1. Data Collection

In the VAAPeNFS (see Section 4), training data are collected continuously and in real-time at each time stamp (t). These data can bring new information, which could indicate a change in the operating conditions or a significant change in the dynamics of the process [32]. Furthermore, these data can be used to create a new rule or modify the existing ones. As mentioned previously, evolving clustering is used in the eTS model, which reflects a gradual change in the rule base; thus, the number of rules will either expand or shrink. It is worth mentioning that the first piece of data is considered the focal point of the newly constructed cluster. To obtain high-quality data from the simulated environment, the training samples should cover all the possible combinations and ranges of the input variation. Figure 3 shows the eye-gaze data points/coordinates of the simulated environment along the x- and y-axes of the camera and the mouse position along the x- and y-axes using 700 s of data. In total, 40 K data instances were recorded, each containing four values: xg, yg, xm, and ym.
Before the actual training of the model takes place, it is necessary to determine which variables of the environment should be chosen as the components of the input vector. This is to ensure that the models trained using these samples can accurately represent the visual eye-gaze coordinates to be simulated in the environment.

5.2. Evaluation Measures

For this research work, we have applied four commonly used evaluation measures for prediction tasks, which are as follows: Root Mean Square Error (RMSE), Mean Square Error (MSE), the Non-Dimensional Error Index (NDEI) [11,52] and Coefficient of Determination (R2). The MSE can be defined as:
$$MSE = \frac{1}{N} \sum_{q=1}^{N} (y_q - \tilde{y}_q)^2$$
where $N$ is the number of data points, $\tilde{y}_q$ are the predicted values, and $y_q$ are the original values. The RMSE is another standard measure, defined as the square root of the MSE:
$$RMSE = \sqrt{\frac{1}{N} \sum_{q=1}^{N} (y_q - \tilde{y}_q)^2}$$
The NDEI is used as a ratio of the root mean square error and standard deviation (sd) of the target data y ( t ) as:
$$NDEI = \frac{RMSE}{sd(y(t))}$$
R2 is called the coefficient of determination and represents how well the estimated predicted value fits the actual value. The higher the value of R2, the greater the similarity to the observed values:
$$R^2 = \left( \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}} \right)^2$$
where y i is the estimated value, x i is the observed value, y - is the average of the estimated values, and x - is the average of the observed values. The correlation coefficient is a commonly used measure, which provides information on the strength of the linear relationship between the observed and estimated values.
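The four measures can be computed as in the following sketch, with R² taken as the squared Pearson correlation between observed and estimated values as described above (the function name is ours, not the authors' evaluation code):

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """MSE, RMSE, NDEI, and R^2 (squared correlation) as defined in
    Section 5.2 -- a minimal sketch of the evaluation measures."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    ndei = rmse / np.std(y_true)               # RMSE over target std dev
    r = np.corrcoef(y_true, y_pred)[0, 1]      # Pearson correlation
    return {"MSE": mse, "RMSE": rmse, "NDEI": ndei, "R2": r ** 2}
```

Note that NDEI normalizes the RMSE by the spread of the target signal, so results remain comparable across targets with different scales.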

5.3. Approaches

For this study, we have applied four different approaches. VAAPeNFS is the eTS+ model, DeTS refers to the dynamically evolving Takagi–Sugeno neuro-fuzzy model, ANFIS refers to an Adaptive Neuro-Fuzzy Inference System [26], and SLR refers to recursive least squares [53]. The ANFIS was shortlisted for comparison with our newly developed technique because of its code availability in Matlab, because it is one of the most commonly used neuro-fuzzy models for the offline approach, and because it has been used in a similar type of study in the recent past [9]. The ANFIS works in offline mode; therefore, the dataset is divided into training and testing sets. The eTS and DeTS models do not need a separation of training and testing data, although we performed it to put all models on the same footing. The fuzzy model has been applied to the collected data in different ways (online as well as offline) for comparison purposes. A Matlab implementation of the eTS, ANFIS, and DeTS neuro-fuzzy models with its default parameter settings was used, except that Ω in step 1 was set to 500. The following parameters were set for the recursive least squares (RLS): Ω = 500, decay factor = 0.98, ρ = 0.5, ζ = 0.01, ε = e−1, and β = 1.001. These are the only parameters of the algorithm that need to be defined. All the experiments were performed using a desktop system, and memory requirements are not included as a performance indicator; however, it was observed that the algorithms did not put high stress on the system. Furthermore, all the experiments were evaluated using standard evaluation measures (see Section 5.2).

6. Results and Analysis

Table 1, Table 2 and Table 3 show the results for the various models: VAAPeNFS, DeTS, ANFIS, and RLS (see Section 5.3). These models have been evaluated using the measures described in Section 5.2: Root Mean Square Error (RMSE), Coefficient of Determination (R2), and the Non-Dimensional Error Index (NDEI). The model structure (evolving or offline) is indicated, and the testing time for predicting the eye-gaze coordinates is also reported. As the ANFIS works in offline mode, the dataset is divided into training and testing sets; the eTS and DeTS models do not require this separation, but we applied it to put all models on the same footing. The training data contain 80% of the instances and the testing data the remaining 20% of the total 40 K instances. Overall, the best results are presented in bold, whereas the category-best results are shown in italics.
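The three evaluation measures can be computed as below. This is an illustrative Python sketch with a tiny hypothetical data set; the function names are ours:

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def r2(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def ndei(actual, predicted):
    """Non-Dimensional Error Index: RMSE normalised by the std of the targets."""
    mean_a = sum(actual) / len(actual)
    std_a = math.sqrt(sum((a - mean_a) ** 2 for a in actual) / len(actual))
    return rmse(actual, predicted) / std_a

actual, predicted = [0, 1, 2, 3], [0, 1, 2, 4]
print(rmse(actual, predicted))            # 0.5
print(r2(actual, predicted))              # 0.8
print(round(ndei(actual, predicted), 4))  # 0.4472
```

Lower RMSE and NDEI values and an R2 closer to 1 indicate a better fit, which is how the entries in Tables 1–3 should be read.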
The results from Table 1 are as expected. Overall, the best results for eye-gaze visual attention have been achieved using the neuro-fuzzy VAAPeNFS and DeTS models, where the RMSE is 0.1792 for the DeTS and 0.1933 for the VAAPeNFS, and the R2 score is 0.8755 for the DeTS and 0.8407 for the VAAPeNFS; these results are almost comparable. The lowest result is for the RLS, with an RMSE of 0.2186 and an R2 of 0.4913. The ANFIS shows an RMSE of 0.2480 and an R2 of 0.7894. This is because the online methods capture the dynamics of the data and use the cluster weights as well as the distance before generating any new cluster. Furthermore, the cluster weight is defined in terms of the data in time and space so that it decays exponentially with time. It can be further observed from Table 1 that the online methods (VAAPeNFS and DeTS) are suitable for this type of task, whereas the RLS is less appropriate. The estimation times of the VAAPeNFS (eTS) and DeTS are much lower than those of the ANFIS and RLS, which indicates that they are more appropriate candidates for real-time data streams. Moreover, the prediction error of the online methods on the test data is quite low in comparison to the other fuzzy and statistical models, which reflects a strong correlation in the data. The number of rules generated by the VAAPeNFS (eTS) is 16, compared to 169 for the ANFIS (which used 13 membership functions for each input variable) and just 9 for the DeTS.
Figure 4 shows the predicted and desired values for the VAAPeNFS. The eTS fuzzy model integrated into the VAAPeNFS evolved to R = 16 rules. It has been further observed from the various experiments that the accuracy of the modelling and the compactness of the model are superior to those of the existing offline approaches. The difference can be seen on a finer scale in Figure 5, which shows that the prediction error is very low and acceptable for this type of task, considering its difficulty level.
Table 2 shows a more detailed analysis of the VAAPeNFS (eTS) and DeTS using additional evaluation measures on 2100 test data instances. The DeTS produced an NDEI of 0.1980 in 0.0037 s with four rules. The NDEI value is higher for the VAAPeNFS (0.2492), which also consumed more rules (eight) and more processing time (0.0068 s). The simulated results show a similar pattern (see Figure 6), where the DeTS predicts values closer to the real values (3500 test instances were used to record this). Figure 7 shows the predicted vs. real values using the VAAPeNFS (eTS) approach; compared to the DeTS, its results are less accurate. This shows that such methods are more appropriate in an AmI environment when there are many objects close to each other on the visual display and the prediction model has to identify the closest instance on the screen with high accuracy.
In the next comparison, the "evolving" characteristic of the eTS model was examined on a sub-part of the dataset (1200 instances) described previously. With the ANFIS, whenever the environment changes, the system must be completely retrained. In the eTS model, however, the same rules are reused with slight modifications to the rule base as it evolves. Figure 8a shows the comparison results where the ANFIS is applied after the training of the system (MSE is used as the evaluation measure), and Figure 8b depicts the change in the environment, which means the model must be completely retrained; otherwise, it generates a high error. Figure 9a shows the results of the eTS system after it is trained, and Figure 9b after a change in the environment. It can be clearly seen that the eTS model (VAAPeNFS) produced a lower error once the environment was totally changed (see Figure 9b); its MSE value is much lower than that of the ANFIS (2 inputs and 13 membership functions). The same types of simulated results are recorded for the DeTS neuro-fuzzy model in Figure 10a,b.
A more detailed analysis of the evolving behaviour is reported in Table 3, which shows that the lowest values are obtained by the DeTS before the change in the environment: an MSE of 0.0134, an RMSE of 0.1156, and an R2 of 0.8373, with five rules. When the environment changes, the DeTS records an MSE of 0.0201, an RMSE of 0.1418, and an R2 of 0.7487, and the number of rules is extended to 11. The eTS model reports an MSE of 0.0212, an RMSE of 0.1456, and an R2 of 0.8187 with 10 rules before the change in the environment, and an MSE of 0.0282, an RMSE of 0.1679, and an R2 of 0.7139 with 18 rules after it. Finally, as expected, the ANFIS performance is not satisfactory after the change in environment (MSE: 0.1635, RMSE: 0.4044, R2: 0.4344 and 39 rules), although it is satisfactory for the trained environment (MSE: 0.0397, RMSE: 0.1992, R2: 0.8094 and 25 rules). This highlights the fact that the evolving models (eTS and DeTS) can be successfully deployed to predict the eye-gaze attention area and adapt new rules according to the change in environment, thus meeting the needs of the current situation.
Another important characteristic of the evolving neuro-fuzzy models (eTS and DeTS) is that they learn from scratch. The results of online modelling using the global identification criteria are shown in Figure 11.
The same datasets (this time with 700 instances) were used to compare the eTS, DeTS (with both local and global learning) and ANFIS models. In the ANFIS, 256 rules were created for 4 inputs with 450 parameters, as shown in Figure 12. It was observed again that the DeTS generated the fewest rules on average, owing to its more advanced method of keeping control over the size of the rule base by disabling obsolete rules. The eTS model evolved 8 rules and 8 linear sub-models with 32 parameters in the consequent part. The centers of the membership functions (MFs) describe the fuzzy sets for the antecedent part of the rules; these are tabulated in Table 4, and the rule evolution for one of the data streams is shown in Figure 13, with an MSE of 0.01559, an NDEI of 0.47138 and an R2 of 0.92603. The calculations took a fraction of a second for each new data sample. The size of the rule base is directly linked to interpretability and simplicity. The only difference between local and global learning is the way the parameters of the resulting linear equations are optimized, which should not influence the cluster generation process. In the ANFIS, the number of rules depends only on the membership functions. One of the disadvantages of the ANFIS is the need for expert opinion to define the MFs and variables in advance, whereas in the eTS/DeTS models, they are determined automatically.
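The rule-creation mechanism behind the eTS can be illustrated with a simplified potential function. The sketch below is a non-recursive, offline illustration only (the real eTS updates the potential recursively and may replace a nearby focal point rather than add a new one); the two-region stream is hypothetical:

```python
import numpy as np

def potential(z, history):
    """Potential of point z: inversely proportional to its mean squared
    distance to all previously seen samples (simplified, non-recursive form;
    eTS computes this recursively, sample by sample)."""
    d2 = np.sum((history - z) ** 2, axis=1)
    return 1.0 / (1.0 + d2.mean())

# Hypothetical stream of 2-D gaze coordinates drawn from two regions; a new
# rule (cluster focal point) is created when a sample's potential exceeds
# that of every existing focal point.  (Real eTS would instead replace a
# nearby focal point when the new sample is close to an existing one.)
rng = np.random.default_rng(1)
stream = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                    rng.normal(0.8, 0.05, (70, 2))])
focal_points = [stream[0]]
for k in range(1, len(stream)):
    z, history = stream[k], stream[:k]
    if potential(z, history) > max(potential(f, history) for f in focal_points):
        focal_points.append(z)
print(len(focal_points))
```

Points near the centre of dense regions have high potential, so focal points (and thus fuzzy rules) emerge automatically from the data, without pre-defined membership functions.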
Using the eTS approach, it was demonstrated that a fuzzy rule-based model could be built online from mouse-coordinate data and then used successfully to predict the visual attention area through the web camera. This model made gradual changes to its structure and parameters for another environment.
The quality of the rule base is constantly monitored using support (the number of data points within a radius distance of the focal point) and age (the accumulated relative time tag) [38]. In Figure 14, it can be seen that the first rule/cluster is the most populated one, with the highest support, whereas the seventh rule/cluster has the lowest support (on 350 test data instances). Figure 15 shows the ages of the various clusters.
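The two rule-quality indicators can be sketched as follows. This is a simplified, hypothetical formulation for illustration; [38] gives the exact recursive definitions used by the eTS:

```python
import numpy as np

def support_and_age(data, timestamps, focal_point, radius, t_now):
    """Support: number of samples within `radius` of a rule's focal point.
    Age: current time minus the mean time tag of those samples (a simplified,
    hypothetical form of the accumulated relative time tag in [38])."""
    dist = np.linalg.norm(data - focal_point, axis=1)
    members = dist <= radius
    support = int(members.sum())
    # a rule that has not received fresh samples for a long time grows old
    age = t_now - timestamps[members].mean() if support else float(t_now)
    return support, age

# Three samples, two of them near the focal point at the origin
data = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
timestamps = np.array([0.0, 1.0, 2.0])
s, a = support_and_age(data, timestamps, np.array([0.0, 0.0]), 1.0, 10.0)
print(s, a)  # 2 9.5
```

A well-supported, young rule describes a currently active region of the input space; a poorly supported, old rule is a candidate for removal.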
Finally, the last part of our experiment highlighted the number of online rule adaptations by the neuro-fuzzy models, that is, the eTS (VAAPeNFS) and DeTS. These models autonomously either generate new rules or retire obsolete ones to capture the dynamics of the data streams. Figure 16 shows the number of rules that were adapted online. It can be seen that the DeTS required significantly fewer rules in comparison with the eTS model. This reduction in new rules in the DeTS is due to its rule base, which has a better mechanism for covering the dynamics of the changing environment. Both models learned rules from the first data instance available in the data stream and modified or built new rules as new data instances became available over time.

7. Conclusions

In this paper, two evolving neuro-fuzzy models, eTS and DeTS, have been used to predict the eye-gaze visual attention area; the resulting approach is named VAAPeNFS. It does not require retraining of the whole model and is computationally efficient, as it is based on the recursive, non-iterative building of the rule base through unsupervised learning. Several standard fuzzy models, recent evolving algorithms, and widely accepted statistical methods were applied to the eye-gaze visual attention area prediction problem and compared. The approach can be used effectively for prediction, and it performed well on the tested data. Results were obtained by applying the method to a simulated environment. The visual attention area data indicate that the proposed method performed better than the other methods while generating fewer rules. Finally, a detailed results analysis was presented using benchmark evaluation measures, showing how the proposed approach evolves in terms of model structure, which makes it suitable for a real-time AmI environment.
Finally, future work could involve applying the neuro-fuzzy models to real-world applications, such as developing assistive technologies for people with visual impairments or designing more effective visual displays for complex information. This could help to demonstrate the practical value of the models and their potential impacts on society.

Author Contributions

Conceptualization, R.N.J., A.N. and J.S.; Methodology, A.N.; Software, A.N., J.S. and M.U.K.; Validation, M.U.K. and S.S.; Investigation, R.N.J.; Data curation, S.S.; Writing — review & editing, G.A.; Supervision, R.N.J.; Project administration, M.E.; Funding acquisition, M.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by EIAS (EIAS-2023) Data Science Lab, Prince Sultan University, KSA.

Data Availability Statement

No data were used to support this study; we conducted simulations to evaluate the performance of the proposed approach. However, any queries about the research conducted in this paper are welcome and may be addressed to the principal authors (Jawad Shafi and Rab Nawaz Jadoon—[email protected] and [email protected]).

Acknowledgments

The authors would like to thank the EIAS Data Science Lab and Prince Sultan University for their encouragement, support and the facilitation of the resources needed to complete the project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Acampora, G.; Loia, V. A proposal of ubiquitous fuzzy computing for ambient intelligence. Inf. Sci. 2008, 178, 631–646. [Google Scholar] [CrossRef]
  2. Bosse, T.; Hoogendoorn, M.; Klein, M.C.; Treur, J. A component-based ambient agent model for assessment of driving behaviour. In Proceedings of the Ubiquitous Intelligence and Computing: 5th International Conference, UIC 2008, Oslo, Norway, 23–25 June 2008; pp. 229–243. [Google Scholar]
  3. Bosse, T.; Memon, Z.A.; Oorburg, R.; Treur, J.; Umair, M.; De Vos, M. A software environment for an adaptive human-aware software agent supporting attention-demanding tasks. Int. J. Artif. Intell. Tools 2011, 20, 819–846. [Google Scholar] [CrossRef]
  4. Susskind, J.; Littlewort, G.; Bartlett, M.; Movellan, J.; Anderson, A. Human and computer recognition of facial expressions of emotion. Neuropsychologia 2007, 45, 152–162. [Google Scholar] [CrossRef]
  5. Peter, C.; Beale, R. Affect and Emotion in Human-Computer Interaction: From Theory to Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008; Volume 4868. [Google Scholar]
  6. De Silva, P.R.; Osano, M.; Marasinghe, A.; Madurapperuma, A.P. Towards recognizing emotion with affective dimensions through body gestures. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Washington, DC, USA, 10–12 April 2006; pp. 269–274. [Google Scholar]
  7. Henricksen, K.; Indulska, J. Developing context-aware pervasive computing applications: Models and approach. Pervasive Mob. Comput. 2006, 2, 37–64. [Google Scholar] [CrossRef]
  8. Acampora, G.; Vitiello, A. Interoperable neuro-fuzzy services for emotion-aware ambient intelligence. Neurocomputing 2013, 122, 3–12. [Google Scholar] [CrossRef]
  9. Shafi, J.; Angelov, P.; Umair, M. Prediction of the Attention Area in Ambient Intelligence Tasks. Innov. Issues Intell. Syst. 2016, 33–56. [Google Scholar]
  10. Zadeh, L.A. Fuzzy sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef]
  11. Angelov, P.P.; Filev, D.P. An approach to online identification of Takagi-Sugeno fuzzy models. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 484–498. [Google Scholar] [CrossRef]
  12. Angelov, P. An approach for fuzzy rule-base adaptation using on-line clustering. Int. J. Approx. Reason. 2004, 35, 275–289. [Google Scholar] [CrossRef]
  13. Angelov, P.; Filev, D.P.; Kasabov, N. Evolving Intelligent Systems: Methodology and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  14. Angelov, P.; Buswell, R. Identification of evolving fuzzy rule-based models. IEEE Trans. Fuzzy Syst. 2002, 10, 667–677. [Google Scholar] [CrossRef]
  15. Angelov, P. Autonomous Learning Systems: From Data Streams to Knowledge in Real-Time; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  16. Angelov, P.; Buswell, R. Evolving rule-based models: A tool for intelligent adaptation. In Proceedings of the Proceedings Joint 9th IFSA World Congress and 20th NAFIPS International Conference (Cat. No. 01TH8569), Vancouver, BC, Canada, 25–28 July 2001; pp. 1062–1067. [Google Scholar]
  17. Dutta Baruah, R.; Angelov, P. DEC: Dynamically evolving clustering and its application to structure identification of evolving fuzzy models. IEEE Trans. Cybern. 2014, 44, 1619–1631. [Google Scholar] [CrossRef]
  18. Cao, F.; Estert, M.; Qian, W.; Zhou, A. Density-based clustering over an evolving data stream with noise. In Proceedings of the 2006 SIAM International Conference on Data Mining, Bethesda, MD, USA, 20–22 April 2006; pp. 328–339. [Google Scholar]
  19. AbuHassan, A.; Alshayeb, M.; Ghouti, L. Detection of design smells using adaptive neuro-fuzzy approaches. Int. J. Fuzzy Syst. 2022, 24, 1927–1943. [Google Scholar] [CrossRef]
  20. Amara, K.; Malek, A.; Bakir, T.; Fekik, A.; Azar, A.T.; Almustafa, K.M.; Bourennane, E.-B.; Hocine, D. Adaptive neuro-fuzzy inference system based maximum power point tracking for stand-alone photovoltaic system. Int. J. Model. Identif. Control 2019, 33, 311–321. [Google Scholar] [CrossRef]
  21. Kanade, P.; David, F.; Kanade, S. Convolutional neural networks (CNN) based eye-gaze tracking system using machine learning algorithm. Eur. J. Electr. Eng. Comput. Sci. 2021, 5, 36–40. [Google Scholar] [CrossRef]
  22. Bâce, M.; Staal, S.; Bulling, A. How far are we from quantifying visual attention in mobile HCI? IEEE Pervasive Comput. 2020, 19, 46–55. [Google Scholar] [CrossRef]
  23. Modi, N.; Singh, J. Real-time camera-based eye gaze tracking using convolutional neural network: A case study on social media website. Virtual Real. 2022, 26, 1489–1506. [Google Scholar] [CrossRef]
  24. Esqueda-Elizondo, J.J.; Juárez-Ramírez, R.; López-Bonilla, O.R.; García-Guerrero, E.E.; Galindo-Aldana, G.M.; Jiménez-Beristáin, L.; Serrano-Trujillo, A.; Tlelo-Cuautle, E.; Inzunza-González, E. Attention measurement of an autism spectrum disorder user using EEG signals: A case study. Math. Comput. Appl. 2022, 27, 21. [Google Scholar] [CrossRef]
  25. Mohanty, R.; Pani, S.K.; Azar, A.T. Recognition of Livestock Disease Using Adaptive Neuro-Fuzzy Inference System. Int. J. Sociotechnol. Knowl. Dev. (IJSKD) 2021, 13, 101–118. [Google Scholar] [CrossRef]
  26. Jang, J.-S. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  27. Doctor, F.; Hagras, H.; Callaghan, V. A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environments. IEEE Trans. Syst. Man Cybern.-Part A Syst. Humans 2004, 35, 55–65. [Google Scholar] [CrossRef]
  28. El-Desouky, B.; Hagras, H. An Adaptive Type-2 Fuzzy Logic Based Agent for Multi-Occupant Ambient Intelligent Environments. In Proceedings of the Intelligent Environments, Barcelona, Spain, 15 July 2009; pp. 257–266. [Google Scholar]
  29. Cook, D.J.; Schmitter-Edgecombe, M. Assessing the quality of activities in a smart environment. Methods Inf. Med. 2009, 48, 480–485. [Google Scholar] [PubMed]
  30. Medjahed, H.; Istrate, D.; Boudy, J.; Baldinger, J.-L.; Dorizzi, B. A pervasive multi-sensor data fusion for smart home healthcare monitoring. In Proceedings of the 2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), Taipei, Taiwan, 27–30 June 2011; pp. 1466–1473. [Google Scholar]
  31. Ryoo, M.S.; Aggarwal, J.K. Hierarchical recognition of human activities interacting with objects. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
  32. Macias, J.J.; Angelov, P.; Zhou, X. A method for predicting quality of the crude oil distillation. In Proceedings of the 2006 International Symposium on Evolving Fuzzy Systems, Ambelside, UK, 7–9 September 2006; pp. 214–220. [Google Scholar]
  33. Du, H.; Zhang, N. Application of evolving Takagi–Sugeno fuzzy model to nonlinear system identification. Appl. Soft Comput. 2008, 8, 676–686. [Google Scholar] [CrossRef]
  34. Du, H.; Zhang, N. Evolutionary takagi-sugeno fuzzy modelling for mr damper. In Proceedings of the 2006 Sixth International Conference on Hybrid Intelligent Systems (HIS’06), Rio De Janeiro, Brazil, 13–15 December 2006; p. 69. [Google Scholar]
  35. Angelov, P.; Ramezani, R.; Zhou, X. Autonomous novelty detection and object tracking in video streams using evolving clustering and Takagi-Sugeno type neuro-fuzzy system. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 1456–1463. [Google Scholar]
  36. Angelov, P.; Sadeghi-Tehran, P.; Ramezani, R. An approach to automatic real-time novelty detection, object identification, and tracking in video streams based on recursive density estimation and evolving Takagi–Sugeno fuzzy systems. Int. J. Intell. Syst. 2011, 26, 189–205. [Google Scholar] [CrossRef]
  37. Precup, R.-E.; Filip, H.-I.; Rădac, M.-B.; Pozna, C.; Dragoş, C.-A.; Preitl, S. Experimental results of evolving Takagi—Sugeno fuzzy models for a nonlinear benchmark. In Proceedings of the 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), Kosice, Slovakia, 2–5 December 2012; pp. 567–572. [Google Scholar]
  38. Salahshoor, K.; Hajisalehi, M.H.; Sefat, M.H. Online identification of evolved Takagi Sugeno fuzzy model for CO2 sequestration process. In Proceedings of the 2nd International Conference on Control, Instrumentation and Automation, Shiraz, Iran, 27–29 December 2011; pp. 1102–1107. [Google Scholar]
  39. Borhan, M.S.; Karim, S. Online multivariable identification of a mimo distillation column using evolving takagi-sugeno fuzzy model. In Proceedings of the 2007 Chinese Control Conference, Zhangjiajie, China, 26–31 July 2007; pp. 328–332. [Google Scholar]
  40. Filev, D.; Lu, J.; Tseng, F.; Prakah-Asante, K. Real-time driver characterization during car following using stochastic evolving models. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, AK, USA, 9–12 October 2011; pp. 1031–1036. [Google Scholar]
  41. Birek, L.; Petrovic, D.; Boylan, J. Water leakage forecasting: The application of a modified fuzzy evolving algorithm. Appl. Soft Comput. 2014, 14, 305–315. [Google Scholar] [CrossRef]
  42. Angelov, P.P. Evolving Rule-Based Models: A Tool for Design of Flexible Adaptive Systems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002; Volume 92. [Google Scholar]
  43. Angelov, P.P.; Filev, D.P. Flexible models with evolving structure. Int. J. Intell. Syst. 2004, 19, 327–340. [Google Scholar] [CrossRef]
  44. Takagi, T.; Sugeno, M. Fuzzy identification of systems and its applications to modeling and control. IEEE Trans. Syst. Man Cybern. 1985, SMC-15, 116–132. [Google Scholar] [CrossRef]
  45. Angelov, P. Evolving Takagi-Sugeno Fuzzy Systems from Streaming Data (eTS+). Evol. Intell. Syst. Methodol. Appl. 2010, 21–50. [Google Scholar]
  46. Chiu, S.L. Fuzzy model identification based on cluster estimation. J. Intell. Fuzzy Syst. 1994, 2, 267–278. [Google Scholar] [CrossRef]
  47. Lieberman, A.M.; Hatrak, M.; Mayberry, R.I. The development of eye gaze control for linguistic input in deaf children. In Proceedings of the 35th annual Boston University Conference on Language Development, Boston, MA, USA, 5–7 November 2010; pp. 391–404. [Google Scholar]
  48. Doshi, A.; Trivedi, M.M. Head and gaze dynamics in visual attention and context learning. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Miami Beach, FL, USA, 20–25 June 2009; pp. 77–84. [Google Scholar]
  49. Doshi, A.; Trivedi, M.M. On the roles of eye gaze and head dynamics in predicting driver’s intent to change lanes. IEEE Trans. Intell. Transp. Syst. 2009, 10, 453–462. [Google Scholar] [CrossRef]
  50. Fujii, K.; Salerno, A.; Sriskandarajah, K.; Kwok, K.-W.; Shetty, K.; Yang, G.-Z. Gaze contingent cartesian control of a robotic arm for laparoscopic surgery. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3582–3589. [Google Scholar]
  51. Forget, A.; Chiasson, S.; Biddle, R. Input precision for gaze-based graphical passwords. In CHI’10 Extended Abstracts on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2010; pp. 4279–4284. [Google Scholar]
  52. Kim, J.; Kasabov, N. HyFIS: Adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems. Neural Netw. 1999, 12, 1301–1319. [Google Scholar] [CrossRef]
  53. Szepesvari, C. Algorithms for reinforcement learning: Synthesis lectures on artificial intelligence and machine learning. Morgan Claypool 2010, 4, 1–103. [Google Scholar]
Figure 1. Flow chart of the proposed approach.
Figure 2. Simulated environment for the proposed approach. (The purple points are unknown values; when one is clicked, a mathematical equation is generated. The correct formula indicates an ally, whereas a wrong one marks it as an enemy. An enemy then turns red, whereas an ally can be green/yellow.)
Figure 3. Eye-gaze data coordinates from the simulated environment.
Figure 4. Modelling performance of estimated vs. real values of the VAAPeNFS.
Figure 5. Prediction error in the VAAPeNFS.
Figure 6. Comparison between predicted and real values using the DeTS.
Figure 7. Comparison between predicted and real data instances using the eTS.
Figure 8. (a) Prediction error of the ANFIS before the change in environment. (b) Prediction error of the ANFIS after the change in environment. (Red shows the eye gaze coordinates while blue shows the ANFIS algorithm prediction).
Figure 9. (a) Prediction error of the eTS model before the change in environment. (b) Prediction error of the eTS model after the change in environment.
Figure 10. (a) Prediction error of the DeTS before the change in environment. (b) Prediction error of the DeTS after the change in environment.
Figure 11. Predicted eye-gaze coordinates using global parameter estimation on a test dataset.
Figure 12. ANFIS structure with 4 inputs and 256 rules.
Figure 13. Rule evolution process of the eTS model for one of the datasets. Eye-gaze data (dots); eTS model (solid line); data samples that originate a new rule (circles); data samples that replace the existing centers (triangles); final position of the focal point (asterisk *).
Figure 14. Population of the rules/clusters.
Figure 15. Age of the rules/clusters.
Figure 16. Number of online rule adaptations.
Table 1. Evaluation results for various approaches on test data.

| Method            | VAAPeNFS (eTS) | DeTS     | ANFIS    | RLS      |
| Type              | Evolving       | Evolving | Off-line | Off-line |
| RMSE              | 0.1933         | 0.1792   | 0.2480   | 0.2186   |
| R2                | 0.8407         | 0.8755   | 0.7894   | 0.4913   |
| Rules             | 16             | 9        | 169      | --       |
| Testing Time (ms) | 0.27           | 0.19     | 203.50   | 477.2    |
Table 2. Evaluation results for the VAAPeNFS (eTS) and DeTS on test data.

| Method   | VAAPeNFS (eTS) | DeTS     |
| Type     | Evolving       | Evolving |
| NDEI     | 0.2492         | 0.1980   |
| Time (s) | 0.0068         | 0.0037   |
| Rules    | 8              | 4        |
Table 3. Evaluation results of the eTS, DeTS and ANFIS before and after changing the environment.

Before change of environment:

| Method | Type     | MSE    | RMSE   | R2     | Rules |
| ANFIS  | Offline  | 0.0397 | 0.1992 | 0.8094 | 25    |
| eTS    | Evolving | 0.0212 | 0.1456 | 0.8187 | 10    |
| DeTS   | Evolving | 0.0134 | 0.1156 | 0.8373 | 5     |

After change of environment:

| Method | Type     | MSE    | RMSE   | R2     | Rules |
| ANFIS  | Offline  | 0.1635 | 0.4044 | 0.4344 | 39    |
| eTS    | Evolving | 0.0282 | 0.1679 | 0.7139 | 18    |
| DeTS   | Evolving | 0.0201 | 0.1418 | 0.7487 | 11    |
Table 4. eTS-generated fuzzy sets for the antecedent.

| Center | x_g    | y_g    | x_m    | y_m    |
| c1*    | 0.4730 | 0.3776 | 0.9248 | 0.7530 |
| c2*    | 0.4685 | 0.4406 | 0.6191 | 0.4970 |
| c3*    | 0.2568 | 0.3706 | 0.2588 | 0.4545 |
| c4*    | 0.9955 | 0.5245 | 1.0000 | 0.4742 |
| c5*    | 1.0000 | 0.5385 | 1.0000 | 0.4742 |
| c6*    | 0.4414 | 0.5664 | 0.3740 | 0.4803 |
| c7*    | 0.3333 | 0.4615 | 0.4531 | 0.5197 |
| c8*    | 0.0315 | 0.7692 | 0.0010 | 0.9470 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Jadoon, R.N.; Nadeem, A.; Shafi, J.; Khan, M.U.; ELAffendi, M.; Shah, S.; Ali, G. A Method for Predicting the Visual Attention Area in Real-Time Using Evolving Neuro-Fuzzy Models. Electronics 2023, 12, 2243. https://doi.org/10.3390/electronics12102243

