Article

Distance and Similarity Measures of Intuitionistic Fuzzy Parameterized Intuitionistic Fuzzy Soft Matrices and Their Applications to Data Classification in Supervised Learning

1 Department of Computer Engineering, Faculty of Engineering and Natural Sciences, İstanbul Rumeli University, İstanbul 34570, Türkiye
2 Department of Mathematics, Faculty of Science, Çanakkale Onsekiz Mart University, Çanakkale 17020, Türkiye
* Author to whom correspondence should be addressed.
Axioms 2023, 12(5), 463; https://doi.org/10.3390/axioms12050463
Submission received: 27 March 2023 / Revised: 5 May 2023 / Accepted: 8 May 2023 / Published: 10 May 2023
(This article belongs to the Special Issue Machine Learning: Theory, Algorithms and Applications)

Abstract
Intuitionistic fuzzy parameterized intuitionistic fuzzy soft matrices (ifpifs-matrices), proposed by Enginoğlu and Arslan in 2020, are worth utilizing in data classification in supervised learning because of their prominent ability to model decision-making problems. This study aims to define the concepts of metrics, quasi-, semi-, and pseudo-metrics and of similarities, quasi-, semi-, and pseudo-similarities over ifpifs-matrices; to develop a new classifier by using them; and to apply it to data classification. To this end, it develops a new classifier, i.e., the Intuitionistic Fuzzy Parameterized Intuitionistic Fuzzy Soft Classifier (IFPIFSC), based on six pseudo-similarities proposed herein. Moreover, this study performs IFPIFSC's simulations using 20 datasets provided in the UCI Machine Learning Repository and obtains its performance results via five performance metrics: accuracy (Acc), precision (Pre), recall (Rec), macro F-score (MacF), and micro F-score (MicF). It also compares these results with those of 10 well-known fuzzy-based classifiers and 5 non-fuzzy-based classifiers. As a result, the mean Acc, Pre, Rec, MacF, and MicF results of IFPIFSC, in comparison with the fuzzy-based classifiers, are 94.45%, 88.21%, 86.11%, 87.98%, and 89.62%, respectively, the best scores, and, in comparison with the non-fuzzy-based classifiers, are 94.34%, 88.02%, 85.86%, 87.65%, and 89.44%, respectively, again the best scores. Later, this study conducts statistical evaluations of the performance results using a non-parametric test (Friedman) and a post hoc test (Nemenyi). The critical diagrams of the Nemenyi test manifest that the differences between the average rankings of IFPIFSC and 10 of the 15 compared classifiers are greater than the critical distance (4.0798). Consequently, IFPIFSC is a convenient method for data classification. Finally, to present opportunities for further research, this study discusses the applications of ifpifs-matrices to machine learning and how to improve IFPIFSC.

1. Introduction

Fuzzy sets [1,2] are a mathematical tool put forward by Zadeh to overcome the problems involving uncertainties in which classical sets are insufficient in modeling. Another tool offered to model problems involving uncertainties is soft sets [3,4,5]. Thus far, several hybrid versions of these two concepts have been defined, such as fuzzy soft sets [6] and fuzzy parameterized fuzzy soft sets (fpfs-sets) [7]. Recently, fpfs-sets have come to the fore due to their ability to model situations where both parameters and alternatives (objects) have fuzzy values. Afterward, fuzzy parameterized fuzzy soft matrices (fpfs-matrices) [8] have been defined to benefit from the modeling capabilities of fpfs-sets and avoid their disadvantages in decision-making problems containing a large amount of data.
Latterly, Memiş et al. [9] proposed a classifier, named Fuzzy Parameterized Fuzzy Soft Normalized Hamming Classifier (FPFSNHC), by defining normalized Hamming pseudo-similarity over fpfs-matrices and successfully applied it to classify some known datasets, such as “Breast Cancer Wisconsin (Diagnostic)”, “Immunotherapy”, “Pima Indian Diabetes”, and “Statlog Heart”. In addition, Memiş and Enginoğlu [10] have developed Fuzzy Parameterized Fuzzy Soft Chebyshev Classifier (FPFSCC) by defining Chebyshev pseudo-similarity over fpfs-matrices and successfully applied it to a classification problem containing medical datasets, such as “Cryotherapy”, “Diabetic Retinopathy”, “Hepatitis”, and “Immunotherapy”. Furthermore, Memiş et al. [11] have suggested a classifier using Euclidean pseudo-similarity over fpfs-matrices, namely Fuzzy Parameterized Fuzzy Soft Euclidean Classifier (FPFS-EC), and successfully applied it to a numerical data classification problem involving the datasets “Breast Tissue” and “Parkinson’s Disease”. Moreover, Memiş et al. [12,13] have propounded Fuzzy Parameterized Fuzzy Soft Aggregation Classifier (FPFS-AC) and Comparison Matrix-Based Fuzzy Parameterized Fuzzy Soft Classifier (FPFS-CMC) utilizing soft decision-making (SDM) methods. Thus, they have given a point of view of classifier constructions. In addition, Memiş et al. [14] have introduced a classifier named Fuzzy Parameterized Fuzzy Soft k-Nearest Neighbor (FPFS-kNN) and compared it with the kNN-based classifiers. The authors have used Pearson, Spearman, and Kendall correlation coefficients in the construction of FPFS-kNN, and these three constructions have been denoted by FPFS-kNN(P), FPFS-kNN(S), and FPFS-kNN(K), respectively.
Despite these successes of fpfs-matrices, they cannot model intuitionistic fuzzy uncertainties [15,16]. Therefore, intuitionistic fuzzy soft sets (ifs-sets) [17], intuitionistic fuzzy parameterized soft sets (ifps-sets) [18], and intuitionistic fuzzy parameterized fuzzy soft sets (ifpfs-sets) [19] have been studied. Later, the concept of intuitionistic fuzzy parameterized intuitionistic fuzzy soft sets (ifpifs-sets) [20], which can model situations where both parameters and objects have intuitionistic fuzzy values, was defined and successfully applied to an SDM problem. Thereafter, intuitionistic fuzzy parameterized intuitionistic fuzzy soft matrices (ifpifs-matrices) [21] were proposed and successfully applied to two SDM problems.
This paper focuses on developing a new classifier in data classification in supervised learning by operationalizing ifpifs-matrices and making theoretical contributions to them. The major contributions of the present study can be summarized as follows:
Defining the concepts metrics, quasi-, semi-, and pseudo-metrics and similarities, quasi-, semi-, and pseudo-similarities over ifpifs-matrices.
Proposing five pseudo-metrics and seven pseudo-similarities.
Developing a new classifier, i.e., Intuitionistic Fuzzy Parameterized Intuitionistic Fuzzy Soft Classifier (IFPIFSC), with the best scores.
Applying IFPIFSC to real-life classification problems successfully.
In the second part of this study, some basic definitions required for the following sections are provided. Section 3 defines the concepts of metric, quasi-, semi-, and pseudo-metric over the ifpifs-matrices space and proposes five pseudo-metrics. In addition, it defines the concepts of similarity, quasi-, semi-, and pseudo-similarity over the ifpifs-matrices space and suggests seven pseudo-similarities. Furthermore, this section clarifies some basic properties of the proposed five pseudo-metrics and seven pseudo-similarities. Section 4 proposes a classifier, i.e., IFPIFSC, based on multiple pseudo-similarities and presents the definitions used in the construction of IFPIFSC. Section 5 first provides the properties of the 20 datasets in the UCI Machine Learning Repository (UCI-MLR) [22] used in the comparison of classifiers. In addition, it presents mathematical notations of the performance metrics. Afterward, this section compares the performance results of IFPIFSC with those of the fuzzy-based classifiers, i.e., Fuzzy kNN [23], Fuzzy Soft Set Classifier (FSSC) [24], Fuzzy Soft Set Classifier Using Distance-Based Similarity Measure (FussCyier) [25], Hamming Distance-Based Fuzzy Soft Set Classifier (HDFSSC) [26], FPFSCC, FPFSNHC, FPFS-EC, FPFS-AC, FPFS-CMC, FPFS-kNN(P), FPFS-kNN(S), and FPFS-kNN(K), and with those of the non-fuzzy-based classifiers, i.e., Support Vector Machines (SVM) [27], Decision Trees (DT) [28], Boosting Trees (BT) [29], Random Forests (RF) [30], and Adaptive Boosting (AdaBoost) [31]. Furthermore, this section performs the statistical evaluations of the performance results using the Friedman [32] and Nemenyi [33] tests in a procedure suggested by Demšar [34] and presents the critical diagrams of the Nemenyi test. Finally, it compares the classifiers' time complexities using big O notation.
The last section discusses classifiers that can be developed by distance/similarity measures of ifpifs-matrices and the need for further research.

2. Preliminaries

This section presents the concept ifpifs-matrices [21] and some of its basic properties. Throughout this study, let E be a parameter set and U be an alternative (object) set.
Definition 1 ([15]).
Let μ and ν be two functions from E to [0, 1] such that $\mu(x)+\nu(x)\leq 1$, for all $x\in E$. Then, the set $\{(x,\mu(x),\nu(x)):x\in E\}$ is called an intuitionistic fuzzy set (if-set) over E.
Here, for all $x\in E$, $\mu(x)$ and $\nu(x)$ are called the membership and non-membership degrees, respectively, and $\pi(x)=1-\mu(x)-\nu(x)$ is called the indeterminacy degree of the element $x\in E$. Moreover, for all $x\in E$, $0\leq\pi(x)\leq 1$ is straightforward. Across the present study, the set of all the if-sets over E is denoted by $IF(E)$ and $f\in IF(E)$. For brevity, the notation ${}^{\mu(x)}_{\nu(x)}x$ is used instead of $(x,\mu(x),\nu(x))$. That is, an if-set f over E is denoted by $f=\left\{{}^{\mu(x)}_{\nu(x)}x : x\in E\right\}$.
Definition 2 ([20]).
Let $f\in IF(E)$ and α be a function from f to $IF(U)$. Then, the set
$$\left\{\left({}^{\mu(x)}_{\nu(x)}x,\ \alpha\!\left({}^{\mu(x)}_{\nu(x)}x\right)\right):x\in E\right\}$$
being the graphic of α, is called an ifpifs-set parameterized via E over U (or briefly over U).
Hereinafter, the set of all the ifpifs-sets over U is denoted by $IFPIFS_E(U)$. Further, in $IFPIFS_E(U)$, since graph(α) and α generate each other uniquely, the notations are interchangeable. Therefore, if it causes no confusion, we denote an ifpifs-set graph(α) by α.
Definition 3 ([21]).
Let $\alpha\in IFPIFS_E(U)$. Then, $[a_{ij}]$ is called the ifpifs-matrix of α and defined by
$$[a_{ij}]:=\begin{bmatrix} a_{01} & a_{02} & a_{03} & \cdots & a_{0n} \\ a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$
such that for $i\in\{0,1,2,\dots\}$ and $j\in\{1,2,\dots\}$,
$$a_{ij}:=\begin{cases} {}^{\mu(x_j)}_{\nu(x_j)}, & i=0 \\[4pt] \alpha\!\left({}^{\mu(x_j)}_{\nu(x_j)}x_j\right)\!(u_i), & i\neq 0 \end{cases}$$
or briefly $a_{ij}:={}^{\mu_{ij}}_{\nu_{ij}}$. Here, if $|U|=m-1$ and $|E|=n$, then $[a_{ij}]$ is an $m\times n$ ifpifs-matrix.
In this paper, if it causes no confusion, the membership and non-membership functions of $[a_{ij}]$, i.e., $\mu_{ij}$ and $\nu_{ij}$, will be represented by $\mu^a_{ij}$ and $\nu^a_{ij}$, respectively. Moreover, the set of all the ifpifs-matrices parameterized via E over U is denoted by $IFPIFS_E[U]$ and $[a_{ij}],[b_{ij}],[c_{ij}]\in IFPIFS_E[U]$.
Definition 4 ([21]).
Let $[a_{ij}]\in IFPIFS_E[U]$. For all i and j, if $\mu_{ij}=\lambda$ and $\nu_{ij}=\varepsilon$, then $[a_{ij}]$ is called a $(\lambda,\varepsilon)$-ifpifs-matrix and denoted by $\left[{}^{\lambda}_{\varepsilon}\right]$. Here, $\left[{}^{0}_{1}\right]$ and $\left[{}^{1}_{0}\right]$ are called the empty and universal ifpifs-matrices, respectively.
Definition 5 ([21]).
Let $[a_{ij}],[b_{ij}]\in IFPIFS_E[U]$.
i. For all i and j, if $\mu^a_{ij}=\mu^b_{ij}$ and $\nu^a_{ij}=\nu^b_{ij}$, then $[a_{ij}]$ and $[b_{ij}]$ are said to be equal ifpifs-matrices, denoted by $[a_{ij}]=[b_{ij}]$.
ii. For all i and j, if $\mu^a_{ij}\leq\mu^b_{ij}$ and $\nu^b_{ij}\leq\nu^a_{ij}$, then $[a_{ij}]$ is said to be a submatrix of $[b_{ij}]$, denoted by $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]$.
iii. If $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]$ and $[a_{ij}]\neq[b_{ij}]$, then $[a_{ij}]$ is said to be a proper submatrix of $[b_{ij}]$, denoted by $[a_{ij}]\,\tilde{\subsetneq}\,[b_{ij}]$.

3. Distance and Similarity Measures of ifpifs-Matrices

This section defines metric, quasi-, semi-, and pseudo-metric over $IFPIFS_E[U]$; proposes the Minkowski, Euclidean, Hamming, generalized Hausdorff, and Hausdorff pseudo-metrics and their normalized forms; and investigates some of their basic properties. Afterward, the section defines similarity, quasi-, semi-, and pseudo-similarity over $IFPIFS_E[U]$; suggests the Minkowski, Euclidean, Hamming, generalized Hausdorff, Hausdorff, Jaccard, Dice, and Cosine pseudo-similarities; and examines some of their basic properties. This section theoretically contributes to the next section, in which the advantages of ifpifs-matrices are employed in classification problems. In other words, this section relays the modeling capability of ifpifs-matrices to machine learning via distance and similarity measures defined over $IFPIFS_E[U]$. From now on, let $I_n=\{1,2,\dots,n\}$ and $I_n^*=\{0,1,2,\dots,n\}$.

3.1. Distance Measures of ifpifs-Matrices

This subsection first defines metric, quasi-, semi-, and pseudo-metric over $IFPIFS_E[U]$. Let $d:X\times X\to\mathbb{R}$ be a mapping and, for all $x,y,z\in X$, let D1, D2, D3, D4, and D5 denote the following properties:
D1. $d(x,y)\geq 0$ (positive semi-definiteness);
D2. $d(x,x)=0$;
D3. $d(x,y)=0\Leftrightarrow x=y$;
D4. $d(x,y)=d(y,x)$ (symmetry);
D5. $d(x,y)\leq d(x,z)+d(z,y)$ (triangle inequality).
Definition 6.
Let $d:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ be a mapping. Then,
i. d is called a quasi-metric iff d satisfies D1, D3, and D5;
ii. d is called a semi-metric iff d satisfies D1, D3, and D4;
iii. d is called a pseudo-metric iff d satisfies D2, D4, and D5;
iv. d is called a metric iff d satisfies D3, D4, and D5.
Secondly, this subsection proposes Minkowski, Euclidean, Hamming, generalized Hausdorff, and Hausdorff pseudo-metrics over I F P I F S E [ U ] and their normalized forms and investigates some of their basic properties.
Proposition 1.
Let $p\in\mathbb{Z}^+$. Then, the mapping $d_{M_p}:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ defined by
$$d_{M_p}([a_{ij}],[b_{ij}]):=\left[\frac{1}{2}\sum_{i=1}^{m-1}\sum_{j=1}^{n}\left(\left|\mu^a_{0j}\mu^a_{ij}-\mu^b_{0j}\mu^b_{ij}\right|^p+\left|\nu^a_{0j}\nu^a_{ij}-\nu^b_{0j}\nu^b_{ij}\right|^p+\left|\pi^a_{0j}\pi^a_{ij}-\pi^b_{0j}\pi^b_{ij}\right|^p\right)\right]^{\frac{1}{p}}$$
is a pseudo-metric over $IFPIFS_E[U]$ and referred to as the Minkowski pseudo-metric (MPM). Furthermore, the normalized MPM is as follows:
$$\hat{d}_{M_p}([a_{ij}],[b_{ij}]):=\left[\frac{1}{2(m-1)n}\sum_{i=1}^{m-1}\sum_{j=1}^{n}\left(\left|\mu^a_{0j}\mu^a_{ij}-\mu^b_{0j}\mu^b_{ij}\right|^p+\left|\nu^a_{0j}\nu^a_{ij}-\nu^b_{0j}\nu^b_{ij}\right|^p+\left|\pi^a_{0j}\pi^a_{ij}-\pi^b_{0j}\pi^b_{ij}\right|^p\right)\right]^{\frac{1}{p}}$$
Here, $d_{M_1}$ and $d_{M_2}$ are called the Hamming pseudo-metric (HPM) and Euclidean pseudo-metric (EPM) and denoted by $d_H$ and $d_E$, respectively. Moreover, $\hat{d}_{M_1}$ and $\hat{d}_{M_2}$ are called the normalized HPM and normalized EPM and denoted by $\hat{d}_H$ and $\hat{d}_E$, respectively.
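The double sum above can be evaluated directly from the entries of two ifpifs-matrices. The following is a minimal Python sketch (not from the paper), assuming an ifpifs-matrix is represented as a list of rows of (μ, ν) pairs whose row 0 holds the intuitionistic fuzzy parameter weights; the function name `minkowski_pm` is illustrative.

```python
import math

def minkowski_pm(A, B, p=2, normalized=False):
    """Sketch of the Minkowski pseudo-metric d_Mp of Proposition 1.

    A and B are m x n lists of (mu, nu) pairs; row 0 holds the
    parameter weights, rows 1..m-1 the alternatives, and the
    indeterminacy degree is pi = 1 - mu - nu.
    """
    m, n = len(A), len(A[0])
    total = 0.0
    for i in range(1, m):            # alternatives
        for j in range(n):           # parameters
            (m0a, n0a), (mia, nia) = A[0][j], A[i][j]
            (m0b, n0b), (mib, nib) = B[0][j], B[i][j]
            p0a, pia = 1 - m0a - n0a, 1 - mia - nia
            p0b, pib = 1 - m0b - n0b, 1 - mib - nib
            # weighted differences of the mu, nu, and pi components
            total += (abs(m0a * mia - m0b * mib) ** p
                      + abs(n0a * nia - n0b * nib) ** p
                      + abs(p0a * pia - p0b * pib) ** p)
    scale = 2 * (m - 1) * n if normalized else 2
    return (total / scale) ** (1.0 / p)
```

For the empty and universal ifpifs-matrices, the normalized form returns 1 while the unnormalized form attains the upper bound $\sqrt[p]{(m-1)n}$.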
Proposition 2.
Let $p\in\mathbb{Z}^+$. Then, the mapping $d_{Hs_p}:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ defined by
$$d_{Hs_p}([a_{ij}],[b_{ij}]):=\left[\frac{1}{2}\sum_{i=1}^{m-1}\max_{j\in I_n}\left(\left|\mu^a_{0j}\mu^a_{ij}-\mu^b_{0j}\mu^b_{ij}\right|^p+\left|\nu^a_{0j}\nu^a_{ij}-\nu^b_{0j}\nu^b_{ij}\right|^p+\left|\pi^a_{0j}\pi^a_{ij}-\pi^b_{0j}\pi^b_{ij}\right|^p\right)\right]^{\frac{1}{p}}$$
is a pseudo-metric and referred to as the generalized Hausdorff pseudo-metric (GHPM). In addition, the normalized GHPM is as follows:
$$\hat{d}_{Hs_p}([a_{ij}],[b_{ij}]):=\left[\frac{1}{2(m-1)}\sum_{i=1}^{m-1}\max_{j\in I_n}\left(\left|\mu^a_{0j}\mu^a_{ij}-\mu^b_{0j}\mu^b_{ij}\right|^p+\left|\nu^a_{0j}\nu^a_{ij}-\nu^b_{0j}\nu^b_{ij}\right|^p+\left|\pi^a_{0j}\pi^a_{ij}-\pi^b_{0j}\pi^b_{ij}\right|^p\right)\right]^{\frac{1}{p}}$$
Here, $d_{Hs_1}$ is called the Hausdorff pseudo-metric (HsPM) and denoted by $d_{Hs}$. Moreover, $\hat{d}_{Hs_1}$ is called the normalized HsPM and denoted by $\hat{d}_{Hs}$.
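Compared with the Minkowski family, the Hausdorff family keeps, for each alternative, only the largest per-parameter deviation. A minimal Python sketch under the same (μ, ν)-pair representation as above (`hausdorff_pm` is an illustrative name, not from the paper):

```python
def hausdorff_pm(A, B, p=1, normalized=False):
    """Sketch of the generalized Hausdorff pseudo-metric d_Hsp of
    Proposition 2; A and B are lists of (mu, nu) pairs with row 0
    holding the intuitionistic fuzzy parameter weights."""
    m, n = len(A), len(A[0])
    total = 0.0
    for i in range(1, m):
        worst = 0.0
        for j in range(n):
            (m0a, n0a), (mia, nia) = A[0][j], A[i][j]
            (m0b, n0b), (mib, nib) = B[0][j], B[i][j]
            p0a, pia = 1 - m0a - n0a, 1 - mia - nia
            p0b, pib = 1 - m0b - n0b, 1 - mib - nib
            term = (abs(m0a * mia - m0b * mib) ** p
                    + abs(n0a * nia - n0b * nib) ** p
                    + abs(p0a * pia - p0b * pib) ** p)
            worst = max(worst, term)   # only the largest deviation counts
        total += worst
    scale = 2 * (m - 1) if normalized else 2
    return (total / scale) ** (1.0 / p)
```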
Proposition 3.
Let $p\in\mathbb{Z}^+$ and $[a_{ij}]_{m\times n},[b_{ij}]_{m\times n},[c_{ij}]_{m\times n}\in IFPIFS_E[U]$. Then, the following properties are valid:
i. $d_{M_p}\!\left(\left[{}^{0}_{1}\right],\left[{}^{1}_{0}\right]\right)=d_{Hs_p}\!\left(\left[{}^{0}_{1}\right],\left[{}^{1}_{0}\right]\right)=1$,
ii. $d_{M_p}\!\left([a_{ij}],[b_{ij}]\right)\leq\sqrt[p]{(m-1)n}$,
iii. $d_{Hs_p}\!\left([a_{ij}],[b_{ij}]\right)\leq\sqrt[p]{m-1}$,
iv. $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]\,\tilde{\subseteq}\,[c_{ij}]\Rightarrow d_{M_p}\!\left([a_{ij}],[b_{ij}]\right)\leq d_{M_p}\!\left([a_{ij}],[c_{ij}]\right)$ and $d_{M_p}\!\left([b_{ij}],[c_{ij}]\right)\leq d_{M_p}\!\left([a_{ij}],[c_{ij}]\right)$,
v. $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]\,\tilde{\subseteq}\,[c_{ij}]\Rightarrow d_{Hs_p}\!\left([a_{ij}],[b_{ij}]\right)\leq d_{Hs_p}\!\left([a_{ij}],[c_{ij}]\right)$ and $d_{Hs_p}\!\left([b_{ij}],[c_{ij}]\right)\leq d_{Hs_p}\!\left([a_{ij}],[c_{ij}]\right)$.
Remark 1.
The properties provided in Proposition 3 are also valid for the normalized pseudo-metrics $\hat{d}_{M_p}$ and $\hat{d}_{Hs_p}$.

3.2. Similarity Measures of ifpifs-Matrices

This subsection first defines similarity, quasi-, semi-, and pseudo-similarity over $IFPIFS_E[U]$. Let $s:X\times X\to\mathbb{R}$ be a mapping and, for all $x,y,z\in X$, let S1, S2, S3, and S4 denote the following properties:
S1. $s(x,x)=1$;
S2. $s(x,y)=1\Leftrightarrow x=y$;
S3. $s(x,y)=s(y,x)$;
S4. $0\leq s(x,y)\leq 1$.
Definition 7.
Let $s:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ be a mapping. Then,
i. s is called a similarity iff s satisfies S2, S3, and S4;
ii. s is called a quasi-similarity iff s satisfies S2 and S4;
iii. s is called a semi-similarity iff s satisfies S2 and S3;
iv. s is called a pseudo-similarity iff s satisfies S1, S3, and S4.
Secondly, this subsection proposes the Minkowski, Euclidean, Hamming [35], generalized Hausdorff, and Hausdorff pseudo-similarities over $IFPIFS_E[U]$ using the normalized pseudo-metrics of ifpifs-matrices provided in Section 3.1.
Proposition 4.
Let $p\in\mathbb{Z}^+$. Then, the mapping $s_{M_p}:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ defined by
$$s_{M_p}([a_{ij}],[b_{ij}]):=1-\hat{d}_{M_p}([a_{ij}],[b_{ij}])$$
is a pseudo-similarity and referred to as the Minkowski pseudo-similarity (MPS).
Here, $s_{M_1}$ and $s_{M_2}$ are called the Hamming pseudo-similarity (HPS) [35] and Euclidean pseudo-similarity (EPS) and denoted by $s_H$ and $s_E$, respectively.
Proposition 5.
Let $p\in\mathbb{Z}^+$. Then, the mapping $s_{Hs_p}:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ defined by
$$s_{Hs_p}([a_{ij}],[b_{ij}]):=1-\hat{d}_{Hs_p}([a_{ij}],[b_{ij}])$$
is a pseudo-similarity and referred to as the generalized Hausdorff pseudo-similarity (GHsPS).
Here, $s_{Hs_1}$ is called the Hausdorff pseudo-similarity (HsPS) and denoted by $s_{Hs}$. Thirdly, this subsection suggests the Jaccard, Dice, and Cosine pseudo-similarities over $IFPIFS_E[U]$.
Proposition 6.
The mapping $s_J:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ defined by
$$s_J([a_{ij}],[b_{ij}]):=\frac{1}{m-1}\sum_{i=1}^{m-1}\frac{\varepsilon+x_i}{\varepsilon+y_i+z_i-x_i}$$
such that
$$x_i=\sum_{j=1}^{n}\left(\mu^a_{0j}\mu^a_{ij}\,\mu^b_{0j}\mu^b_{ij}+\nu^a_{0j}\nu^a_{ij}\,\nu^b_{0j}\nu^b_{ij}+\pi^a_{0j}\pi^a_{ij}\,\pi^b_{0j}\pi^b_{ij}\right)$$
$$y_i=\sum_{j=1}^{n}\left(\left(\mu^a_{0j}\mu^a_{ij}\right)^2+\left(\nu^a_{0j}\nu^a_{ij}\right)^2+\left(\pi^a_{0j}\pi^a_{ij}\right)^2\right)$$
and
$$z_i=\sum_{j=1}^{n}\left(\left(\mu^b_{0j}\mu^b_{ij}\right)^2+\left(\nu^b_{0j}\nu^b_{ij}\right)^2+\left(\pi^b_{0j}\pi^b_{ij}\right)^2\right)$$
is a pseudo-similarity and referred to as the Jaccard pseudo-similarity (JPS). Here, $\varepsilon\ll 1$ is a positive constant, e.g., $\varepsilon=0.0001$.
Proof. 
Let $[a_{ij}],[b_{ij}]\in IFPIFS_E[U]$. It is clear that $s_J$ satisfies the conditions S1 and S3. Then, it is sufficient to prove the condition S4. For all $i\in I_{m-1}$ and $j\in I_n$,
$$0\leq\left(\mu^a_{0j}\mu^a_{ij}-\mu^b_{0j}\mu^b_{ij}\right)^2+\left(\nu^a_{0j}\nu^a_{ij}-\nu^b_{0j}\nu^b_{ij}\right)^2+\left(\pi^a_{0j}\pi^a_{ij}-\pi^b_{0j}\pi^b_{ij}\right)^2$$
Summing over $j$ yields $2x_i\leq y_i+z_i$, and since $x_i\geq 0$, it follows that $0\leq x_i\leq y_i+z_i-x_i$. Therefore,
$$0\leq\frac{\varepsilon+x_i}{\varepsilon+y_i+z_i-x_i}\leq 1$$
Then,
$$0\leq s_J([a_{ij}],[b_{ij}])=\frac{1}{m-1}\sum_{i=1}^{m-1}\frac{\varepsilon+x_i}{\varepsilon+y_i+z_i-x_i}\leq\frac{1}{m-1}(m-1)=1$$
□
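The quantities $x_i$, $y_i$, and $z_i$ are ordinary dot products and squared norms of the weighted $(\mu,\nu,\pi)$ components, so JPS is short to implement. A minimal Python sketch under the same list-of-(μ, ν)-pairs representation used earlier (`jaccard_ps` and `weighted_triples` are illustrative names):

```python
def weighted_triples(A, i, j):
    # weighted components (mu_0j*mu_ij, nu_0j*nu_ij, pi_0j*pi_ij)
    (m0, n0), (mi, ni) = A[0][j], A[i][j]
    p0, pi = 1 - m0 - n0, 1 - mi - ni
    return (m0 * mi, n0 * ni, p0 * pi)

def jaccard_ps(A, B, eps=1e-4):
    """Sketch of the Jaccard pseudo-similarity s_J of Proposition 6."""
    m, n = len(A), len(A[0])
    acc = 0.0
    for i in range(1, m):
        x = y = z = 0.0
        for j in range(n):
            wa = weighted_triples(A, i, j)
            wb = weighted_triples(B, i, j)
            x += sum(u * v for u, v in zip(wa, wb))   # dot product
            y += sum(u * u for u in wa)               # squared norm of A's row
            z += sum(v * v for v in wb)               # squared norm of B's row
        acc += (eps + x) / (eps + y + z - x)
    return acc / (m - 1)
```

Replacing the per-row ratio by $(\varepsilon+2x_i)/(\varepsilon+y_i+z_i)$ gives the Dice pseudo-similarity of Proposition 7, and by $(\varepsilon+x_i)/(\varepsilon+\sqrt{y_i z_i})$ the Cosine pseudo-similarity of Proposition 8.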
Proposition 7.
The mapping $s_D:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ defined by
$$s_D([a_{ij}],[b_{ij}]):=\frac{1}{m-1}\sum_{i=1}^{m-1}\frac{\varepsilon+2x_i}{\varepsilon+y_i+z_i}$$
where $x_i$, $y_i$, and $z_i$ are as in Proposition 6, is a pseudo-similarity and referred to as the Dice pseudo-similarity (DPS). Here, $\varepsilon\ll 1$ is a positive constant, e.g., $\varepsilon=0.0001$.
Proof. 
Let $[a_{ij}],[b_{ij}]\in IFPIFS_E[U]$. It is clear that $s_D$ satisfies the conditions S1 and S3. Then, it is sufficient to prove the condition S4. For all $i\in I_{m-1}$ and $j\in I_n$, since
$$0\leq\left(\mu^a_{0j}\mu^a_{ij}-\mu^b_{0j}\mu^b_{ij}\right)^2+\left(\nu^a_{0j}\nu^a_{ij}-\nu^b_{0j}\nu^b_{ij}\right)^2+\left(\pi^a_{0j}\pi^a_{ij}-\pi^b_{0j}\pi^b_{ij}\right)^2$$
summing over $j$ yields $0\leq 2x_i\leq y_i+z_i$. Hence,
$$0\leq\frac{\varepsilon+2x_i}{\varepsilon+y_i+z_i}\leq 1$$
Then,
$$0\leq s_D([a_{ij}],[b_{ij}])=\frac{1}{m-1}\sum_{i=1}^{m-1}\frac{\varepsilon+2x_i}{\varepsilon+y_i+z_i}\leq\frac{1}{m-1}(m-1)=1$$
□
Proposition 8.
The mapping $s_C:IFPIFS_E[U]\times IFPIFS_E[U]\to\mathbb{R}$ defined by
$$s_C([a_{ij}],[b_{ij}]):=\frac{1}{m-1}\sum_{i=1}^{m-1}\frac{\varepsilon+x_i}{\varepsilon+\sqrt{y_i z_i}}$$
where $x_i$, $y_i$, and $z_i$ are as in Proposition 6, is a pseudo-similarity and referred to as the Cosine pseudo-similarity (CPS). Here, $\varepsilon\ll 1$ is a positive constant, e.g., $\varepsilon=0.0001$.
Proposition 9.
Let $p\in\mathbb{Z}^+$ and $[a_{ij}]_{m\times n},[b_{ij}]_{m\times n},[c_{ij}]_{m\times n}\in IFPIFS_E[U]$. Then, the following properties are valid:
i. $s_{M_p}\!\left(\left[{}^{0}_{1}\right],\left[{}^{1}_{0}\right]\right)=s_{Hs_p}\!\left(\left[{}^{0}_{1}\right],\left[{}^{1}_{0}\right]\right)=s_J\!\left(\left[{}^{0}_{1}\right],\left[{}^{1}_{0}\right]\right)=s_D\!\left(\left[{}^{0}_{1}\right],\left[{}^{1}_{0}\right]\right)=s_C\!\left(\left[{}^{0}_{1}\right],\left[{}^{1}_{0}\right]\right)=0$,
ii. $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]\,\tilde{\subseteq}\,[c_{ij}]\Rightarrow s_{M_p}\!\left([a_{ij}],[c_{ij}]\right)\leq s_{M_p}\!\left([a_{ij}],[b_{ij}]\right)$ and $s_{M_p}\!\left([a_{ij}],[c_{ij}]\right)\leq s_{M_p}\!\left([b_{ij}],[c_{ij}]\right)$,
iii. $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]\,\tilde{\subseteq}\,[c_{ij}]\Rightarrow s_{Hs_p}\!\left([a_{ij}],[c_{ij}]\right)\leq s_{Hs_p}\!\left([a_{ij}],[b_{ij}]\right)$ and $s_{Hs_p}\!\left([a_{ij}],[c_{ij}]\right)\leq s_{Hs_p}\!\left([b_{ij}],[c_{ij}]\right)$,
iv. $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]\,\tilde{\subseteq}\,[c_{ij}]\Rightarrow s_J\!\left([a_{ij}],[c_{ij}]\right)\leq s_J\!\left([a_{ij}],[b_{ij}]\right)$ and $s_J\!\left([a_{ij}],[c_{ij}]\right)\leq s_J\!\left([b_{ij}],[c_{ij}]\right)$,
v. $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]\,\tilde{\subseteq}\,[c_{ij}]\Rightarrow s_D\!\left([a_{ij}],[c_{ij}]\right)\leq s_D\!\left([a_{ij}],[b_{ij}]\right)$ and $s_D\!\left([a_{ij}],[c_{ij}]\right)\leq s_D\!\left([b_{ij}],[c_{ij}]\right)$,
vi. $[a_{ij}]\,\tilde{\subseteq}\,[b_{ij}]\,\tilde{\subseteq}\,[c_{ij}]\Rightarrow s_C\!\left([a_{ij}],[c_{ij}]\right)\leq s_C\!\left([a_{ij}],[b_{ij}]\right)$ and $s_C\!\left([a_{ij}],[c_{ij}]\right)\leq s_C\!\left([b_{ij}],[c_{ij}]\right)$.

4. Proposed Classifier (IFPIFSC)

This section presents the basic mathematical notations needed for the proposed classifier based on ifpifs-matrices. Throughout the present study, let $D=[d_{ij}]_{m\times(n+1)}$ stand for a data matrix whose last column consists of the data's labels, where m and n represent the numbers of samples and parameters in the data matrix, respectively. $(D_{train})_{m_1\times n}$, $C_{m_1\times 1}$, and $(D_{test})_{m_2\times n}$ stand for a training matrix, the class matrix of the training matrix, and the testing matrix attained from the data matrix D, respectively, such that $m_1+m_2=m$. Let $U_{k\times 1}$ be a matrix consisting of the unique class labels of $C_{m_1\times 1}$. $D_i^{train}$ and $D_i^{test}$ denote the i-th rows of $D_{train}$ and $D_{test}$, respectively. Similarly, $D_{train}^j$ and $D_{test}^j$ denote the j-th columns of $D_{train}$ and $D_{test}$, respectively. Moreover, $T_{m_2\times 1}$ represents the predicted class labels of the testing samples.
Definition 8.
Let $x,y\in\mathbb{R}^n$. Then, the function $P:\mathbb{R}^n\times\mathbb{R}^n\to[-1,1]$ defined by
$$P(x,y):=\frac{n\sum_{j=1}^{n}x_j y_j-\sum_{j=1}^{n}x_j\sum_{j=1}^{n}y_j}{\sqrt{n\sum_{j=1}^{n}x_j^2-\left(\sum_{j=1}^{n}x_j\right)^2}\sqrt{n\sum_{j=1}^{n}y_j^2-\left(\sum_{j=1}^{n}y_j\right)^2}}$$
is called the Pearson correlation coefficient between x and y.
Definition 9.
Let $x\in\mathbb{R}^n$ and $j\in I_n$. Then, the vector $\hat{x}\in\mathbb{R}^n$ defined by
$$\hat{x}_j:=\begin{cases}\dfrac{x_j-\min_{k\in I_n}\{x_k\}}{\max_{k\in I_n}\{x_k\}-\min_{k\in I_n}\{x_k\}}, & \max_{k\in I_n}\{x_k\}\neq\min_{k\in I_n}\{x_k\}\\[8pt] 1, & \max_{k\in I_n}\{x_k\}=\min_{k\in I_n}\{x_k\}\end{cases}$$
is called the normalizing vector of x.
Definition 10.
Let $D=[d_{ij}]_{m\times(n+1)}$ be a data matrix, $i\in I_m$, and $j\in I_n$. Then, the matrix $\tilde{D}=[\tilde{d}_{ij}]_{m\times n}$ defined by
$$\tilde{d}_{ij}:=\begin{cases}\dfrac{d_{ij}-\min_{k\in I_m}\{d_{kj}\}}{\max_{k\in I_m}\{d_{kj}\}-\min_{k\in I_m}\{d_{kj}\}}, & \max_{k\in I_m}\{d_{kj}\}\neq\min_{k\in I_m}\{d_{kj}\}\\[8pt] 1, & \max_{k\in I_m}\{d_{kj}\}=\min_{k\in I_m}\{d_{kj}\}\end{cases}$$
is called the column normalized matrix (feature-fuzzification matrix) of D.
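This column-wise min-max rescaling is straightforward to implement. A minimal Python sketch operating on a plain list-of-lists feature matrix (the function name `column_normalize` is illustrative):

```python
def column_normalize(D):
    """Min-max feature fuzzification of Definition 10: each column is
    rescaled to [0, 1]; a constant column maps to 1 by convention."""
    m, n = len(D), len(D[0])
    out = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = [D[i][j] for i in range(m)]
        lo, hi = min(col), max(col)
        for i in range(m):
            out[i][j] = 1.0 if hi == lo else (D[i][j] - lo) / (hi - lo)
    return out
```

Definitions 11 and 12 apply the same map to the training and testing matrices, reusing the column minima and maxima of the whole data matrix D.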
Definition 11.
Let $(D_{train})_{m_1\times n}$ be a training matrix obtained from $D=[d_{ij}]_{m\times(n+1)}$, $i\in I_{m_1}$, and $j\in I_n$. Then, the matrix $\tilde{D}_{train}=[\tilde{d}_{ij}^{train}]_{m_1\times n}$ defined by
$$\tilde{d}_{ij}^{train}:=\begin{cases}\dfrac{d_{ij}^{train}-\min_{k\in I_m}\{d_{kj}\}}{\max_{k\in I_m}\{d_{kj}\}-\min_{k\in I_m}\{d_{kj}\}}, & \max_{k\in I_m}\{d_{kj}\}\neq\min_{k\in I_m}\{d_{kj}\}\\[8pt] 1, & \max_{k\in I_m}\{d_{kj}\}=\min_{k\in I_m}\{d_{kj}\}\end{cases}$$
is called the column normalized matrix (feature-fuzzification matrix) of $D_{train}$.
Definition 12.
Let $(D_{test})_{m_2\times n}$ be a testing matrix obtained from $D=[d_{ij}]_{m\times(n+1)}$, $i\in I_{m_2}$, and $j\in I_n$. Then, the matrix $\tilde{D}_{test}=[\tilde{d}_{ij}^{test}]_{m_2\times n}$ defined by
$$\tilde{d}_{ij}^{test}:=\begin{cases}\dfrac{d_{ij}^{test}-\min_{k\in I_m}\{d_{kj}\}}{\max_{k\in I_m}\{d_{kj}\}-\min_{k\in I_m}\{d_{kj}\}}, & \max_{k\in I_m}\{d_{kj}\}\neq\min_{k\in I_m}\{d_{kj}\}\\[8pt] 1, & \max_{k\in I_m}\{d_{kj}\}=\min_{k\in I_m}\{d_{kj}\}\end{cases}$$
is called the column normalized matrix (feature-fuzzification matrix) of $D_{test}$.
Definition 13 ([35]).
Let $D_{train}=[d_{ij}^{train}]_{m_1\times n}$ and $C_{m_1\times 1}$ be a training matrix and its class matrix obtained from a data matrix $D=[d_{ij}]_{m\times(n+1)}$, respectively. Then, the matrix $ifw_{D_{train}}^{\lambda P}=\left[{}^{\mu_{1j}^{\lambda P}}_{\nu_{1j}^{\lambda P}}\right]_{1\times n}$ is called the intuitionistic fuzzification weight matrix based on the Pearson correlation coefficient of $D_{train}$ and defined by
$$\mu_{1j}^{\lambda P}:=1-\left(1-\left|P\!\left(D_{train}^j,C\right)\right|\right)^{\lambda}$$
and
$$\nu_{1j}^{\lambda P}:=\left(1-\left|P\!\left(D_{train}^j,C\right)\right|\right)^{\lambda(\lambda+1)}$$
such that $j\in I_n$ and $\lambda\in[0,\infty)$.
Definition 14 ([35]).
Let $\tilde{D}_{train}=[\tilde{d}_{ij}^{train}]_{m_1\times n}$ be the column normalized matrix of a matrix $(D_{train})_{m_1\times n}$. Then, the matrix $D_{train}^{\lambda}=[d_{ij}^{train_\lambda}]=\left[{}^{\mu_{ij}^{trainD_\lambda}}_{\nu_{ij}^{trainD_\lambda}}\right]_{m_1\times n}$ is called the intuitionistic fuzzification of $\tilde{D}_{train}$ and defined by
$$\mu_{ij}^{trainD_\lambda}:=1-\left(1-\tilde{d}_{ij}^{train}\right)^{\lambda}$$
and
$$\nu_{ij}^{trainD_\lambda}:=\left(1-\tilde{d}_{ij}^{train}\right)^{\lambda(\lambda+1)}$$
such that $i\in I_{m_1}$, $j\in I_n$, and $\lambda\in[0,\infty)$.
Definition 15 ([35]).
Let $\tilde{D}_{test}=[\tilde{d}_{ij}^{test}]_{m_2\times n}$ be the column normalized matrix of a matrix $(D_{test})_{m_2\times n}$. Then, the matrix $D_{test}^{\lambda}=[d_{ij}^{test_\lambda}]=\left[{}^{\mu_{ij}^{testD_\lambda}}_{\nu_{ij}^{testD_\lambda}}\right]_{m_2\times n}$ is called the intuitionistic fuzzification of $\tilde{D}_{test}$ and defined by
$$\mu_{ij}^{testD_\lambda}:=1-\left(1-\tilde{d}_{ij}^{test}\right)^{\lambda}$$
and
$$\nu_{ij}^{testD_\lambda}:=\left(1-\tilde{d}_{ij}^{test}\right)^{\lambda(\lambda+1)}$$
such that $i\in I_{m_2}$, $j\in I_n$, and $\lambda\in[0,\infty)$.
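Definitions 13-15 all apply the same pointwise map: a fuzzified value $d\in[0,1]$ (a normalized feature value, or $|P(D_{train}^j,C)|$ for the weight matrix) becomes the intuitionistic fuzzy pair $\left(1-(1-d)^{\lambda},\ (1-d)^{\lambda(\lambda+1)}\right)$. A minimal Python sketch (`intuitionistic_fuzzify` is an illustrative name):

```python
def intuitionistic_fuzzify(d, lam):
    """Pointwise map of Definitions 13-15: mu = 1 - (1-d)^lam and
    nu = (1-d)^(lam*(lam+1)). Since lam*(lam+1) >= lam for lam >= 0
    and the base lies in [0, 1], mu + nu <= 1 always holds, so
    (mu, nu) is a valid intuitionistic fuzzy pair."""
    mu = 1.0 - (1.0 - d) ** lam
    nu = (1.0 - d) ** (lam * (lam + 1))
    return mu, nu
```

For λ = 1, the map reduces to μ = d and ν = (1 − d)², so the indeterminacy degree π = (1 − d) − (1 − d)² vanishes only at d ∈ {0, 1}.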
Definition 16 ([35]).
Let $(\tilde{D}_{train})_{m_1\times n}$ be the column normalized matrix of a matrix $(D_{train})_{m_1\times n}$ and $D_{train}^{\lambda}=[d_{ij}^{train_\lambda}]=\left[{}^{\mu_{ij}^{trainD_\lambda}}_{\nu_{ij}^{trainD_\lambda}}\right]_{m_1\times n}$ be the intuitionistic fuzzification of $\tilde{D}_{train}$. Then, the ifpifs-matrix $\left[b_{ij}^{D_k^{train_\lambda}}\right]_{2\times n}$ is called the training ifpifs-matrix obtained by the k-th row of $D_{train}^{\lambda}$ and $ifw_{D_{train}}^{\lambda P}$ and defined by
$$b_{0j}^{D_k^{train_\lambda}}:={}^{\mu_{1j}^{\lambda P}}_{\nu_{1j}^{\lambda P}}\quad\text{and}\quad b_{1j}^{D_k^{train_\lambda}}:={}^{\mu_{kj}^{trainD_\lambda}}_{\nu_{kj}^{trainD_\lambda}}$$
such that $k\in I_{m_1}$ and $j\in I_n$.
Definition 17 ([35]).
Let $(\tilde{D}_{test})_{m_2\times n}$ be the column normalized matrix of a matrix $(D_{test})_{m_2\times n}$ and $D_{test}^{\lambda}=[d_{ij}^{test_\lambda}]=\left[{}^{\mu_{ij}^{testD_\lambda}}_{\nu_{ij}^{testD_\lambda}}\right]_{m_2\times n}$ be the intuitionistic fuzzification of $\tilde{D}_{test}$. Then, the ifpifs-matrix $\left[a_{ij}^{D_k^{test_\lambda}}\right]_{2\times n}$ is called the testing ifpifs-matrix obtained by the k-th row of $D_{test}^{\lambda}$ and $ifw_{D_{train}}^{\lambda P}$ and defined by
$$a_{0j}^{D_k^{test_\lambda}}:={}^{\mu_{1j}^{\lambda P}}_{\nu_{1j}^{\lambda P}}\quad\text{and}\quad a_{1j}^{D_k^{test_\lambda}}:={}^{\mu_{kj}^{testD_\lambda}}_{\nu_{kj}^{testD_\lambda}}$$
such that $k\in I_{m_2}$ and $j\in I_n$.
Afterward, this section proposes a new classifier named IFPIFSC. This classifier utilizes Definition 13 to attain a parameter effect-based feature weight on classification. It then builds the training ifpifs-matrices and the testing ifpifs-matrix using Definitions 11, 12, and 14-17. Later, employing HPS, EPS, MPS, HsPS, JPS, and CPS, a matrix of the similarity values of the testing ifpifs-matrix to each training ifpifs-matrix is obtained. For each pseudo-similarity, the class of the training sample with the highest similarity is found, and the class with the highest frequency among these is determined and assigned to the test sample. This procedure is repeated for all test samples. Lastly, the predicted class matrix is generated for the test data. IFPIFSC's flowchart (Figure 1) and algorithm steps (Algorithm 1) are as follows:
Algorithm 1 IFPIFSC's pseudocode
Input: $(D_{train})_{m_1\times n}$, $C_{m_1\times 1}$, $(D_{test})_{m_2\times n}$, $\lambda_1$, and $\lambda_2$
Output: $T_{m_2\times 1}$
1: procedure IFPIFSC($D_{train}$, C, $D_{test}$, $\lambda_1$, $\lambda_2$)
2:   Compute $ifw_{D_{train}}^{\lambda_1 P}$ using $D_{train}$ and C
3:   Compute the feature fuzzifications of $D_{train}$ and $D_{test}$, namely $\tilde{D}_{train}$ and $\tilde{D}_{test}$
4:   Compute the feature intuitionistic fuzzifications of $\tilde{D}_{train}$ and $\tilde{D}_{test}$, namely $D_{train}^{\lambda_2}$ and $D_{test}^{\lambda_2}$
5:   for k from 1 to $m_2$ do
6:     Compute the testing ifpifs-matrix $\left[a_{ij}^{D_k^{test_{\lambda_2}}}\right]_{2\times n}$ using $ifw_{D_{train}}^{\lambda_1 P}$ and $D_k^{test_{\lambda_2}}$
7:     for l from 1 to $m_1$ do
8:       Compute the training ifpifs-matrix $\left[b_{ij}^{D_l^{train_{\lambda_2}}}\right]_{2\times n}$ using $ifw_{D_{train}}^{\lambda_1 P}$ and $D_l^{train_{\lambda_2}}$
9:       $sm_{l1}\leftarrow s_H\!\left(\left[a_{ij}^{D_k^{test_{\lambda_2}}}\right],\left[b_{ij}^{D_l^{train_{\lambda_2}}}\right]\right)$
10:      $sm_{l2}\leftarrow s_E\!\left(\left[a_{ij}^{D_k^{test_{\lambda_2}}}\right],\left[b_{ij}^{D_l^{train_{\lambda_2}}}\right]\right)$
11:      $sm_{l3}\leftarrow s_{M_3}\!\left(\left[a_{ij}^{D_k^{test_{\lambda_2}}}\right],\left[b_{ij}^{D_l^{train_{\lambda_2}}}\right]\right)$
12:      $sm_{l4}\leftarrow s_{Hs}\!\left(\left[a_{ij}^{D_k^{test_{\lambda_2}}}\right],\left[b_{ij}^{D_l^{train_{\lambda_2}}}\right]\right)$
13:      $sm_{l5}\leftarrow s_J\!\left(\left[a_{ij}^{D_k^{test_{\lambda_2}}}\right],\left[b_{ij}^{D_l^{train_{\lambda_2}}}\right]\right)$
14:      $sm_{l6}\leftarrow s_C\!\left(\left[a_{ij}^{D_k^{test_{\lambda_2}}}\right],\left[b_{ij}^{D_l^{train_{\lambda_2}}}\right]\right)$
15:    end for
16:    $F_{m_1\times 6}\leftarrow$ the class matrix of $[sm_{ls}]$ sorted in descending order with respect to each pseudo-similarity
17:    for i from 1 to $m_1$ do
18:      $F_i\leftarrow$ the i-th row of F
19:      $w\leftarrow$ mode($F_i$)
20:      if mode($F_i$) is unique then
21:        break
22:      end if
23:    end for
24:    $t_{k1}\leftarrow w$
25:  end for
26:  return $T_{m_2\times 1}$
27: end procedure
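Stripped of the ifpifs-matrix bookkeeping, the voting logic of steps 5-25 can be sketched as follows. This is an abridged illustration, not the paper's full method: each pseudo-similarity in `sims` votes for the class of the training sample it finds most similar, and the mode of the votes is the prediction; a single voting pass stands in for the tie-breaking loop of steps 16-23, and `ifpifsc_predict` is an illustrative name.

```python
from statistics import mode

def ifpifsc_predict(train, labels, test, sims):
    """Abridged IFPIFSC voting: `train` and `test` hold the samples
    (e.g., ifpifs-matrices), `labels` the training classes, and `sims`
    the pseudo-similarity functions; the majority class among the
    per-similarity nearest neighbors is the prediction."""
    preds = []
    for t in test:
        votes = []
        for s in sims:
            scores = [s(t, tr) for tr in train]
            # each pseudo-similarity votes for its most similar sample
            votes.append(labels[scores.index(max(scores))])
        preds.append(mode(votes))
    return preds
```

The sketch is agnostic to the sample representation: any objects on which the supplied similarity functions operate will do.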

5. Simulation and Performance Comparison

The present section provides the details of the 20 datasets in the UCI-MLR [22] used for the classification task. It then presents five performance metrics to be used for performance comparison. Afterward, this section executes a simulation to demonstrate that IFPIFSC exhibits a better classification performance than Fuzzy kNN [23], FSSC [24], FussCyier [25], HDFSSC [26], FPFSCC [10], FPFSNHC [9], FPFS-EC [11], FPFS-AC [13], FPFS-CMC [12], FPFS-kNN(P) [14], FPFS-kNN(S) [14], FPFS-kNN(K) [14], SVM [27], DT [28], BT [29], RF [30], and AdaBoost [31]. Moreover, it performs statistical analyses of the simulation results employing the Friedman test [32], a non-parametric test, and the Nemenyi test [33], a post hoc test. Finally, this section provides the time complexities of the compared classifiers in compliance with big O notation.

5.1. UCI Datasets and Features

This subsection presents the properties of the following datasets [22], used in the simulation, in Table 1: “Zoo”, “Breast Tissue”, “Teaching Assistant Evaluation”, “Wine”, “Parkinsons[sic] ”, “Sonar”, “Seeds”, “Parkinson Acoustic”, “Ecoli”, “Leaf”, “Ionosphere”, “Libras Movement”, “Dermatology”, “Breast Cancer Wisconsin”, “HCV Data”, “Parkinson’s Disease Classification”, “Mice Protein Expression”, “Semeion Handwritten Digit”, “Car Evaluation”, and “Wireless Indoor Localization”.

5.2. Performance Metrics

This subsection provides the mathematical notations of five performance metrics [36,37,38], i.e., accuracy (Acc), precision (Pre), recall (Rec), macro F-score (MacF), and micro F-score (MicF), to compare the aforementioned classifiers. Let $D^{test} = \{x_1, x_2, \ldots, x_n\}$, $T = \{t_1, t_2, \ldots, t_n\}$, $T' = \{t'_1, t'_2, \ldots, t'_n\}$, and $l$ be the $n$ samples to be classified, the ground truth class sets of the samples, the prediction class sets of the samples, and the number of classes, respectively.
$$\begin{aligned}
\mathrm{Acc}(T,T') &:= \frac{1}{l}\sum_{i=1}^{l}\frac{TP_i+TN_i}{TP_i+TN_i+FP_i+FN_i}\\
\mathrm{Pre}(T,T') &:= \frac{1}{l}\sum_{i=1}^{l}\frac{TP_i}{TP_i+FP_i}\\
\mathrm{Rec}(T,T') &:= \frac{1}{l}\sum_{i=1}^{l}\frac{TP_i}{TP_i+FN_i}\\
\mathrm{MacF}(T,T') &:= \frac{1}{l}\sum_{i=1}^{l}\frac{2\,TP_i}{2\,TP_i+FP_i+FN_i}\\
\mathrm{MicF}(T,T') &:= \frac{2\sum_{i=1}^{l}TP_i}{2\sum_{i=1}^{l}TP_i+\sum_{i=1}^{l}FP_i+\sum_{i=1}^{l}FN_i}
\end{aligned}$$
where $TP_i$, $TN_i$, $FP_i$, and $FN_i$ are the numbers of true positives, true negatives, false positives, and false negatives for the class $i$, respectively, and their mathematical notations are as follows:
$$\begin{aligned}
TP_i &:= \left|\left\{x_k \mid i \in t_k \wedge i \in t'_k,\ 1 \le k \le n\right\}\right| & TN_i &:= \left|\left\{x_k \mid i \notin t_k \wedge i \notin t'_k,\ 1 \le k \le n\right\}\right|\\
FP_i &:= \left|\left\{x_k \mid i \notin t_k \wedge i \in t'_k,\ 1 \le k \le n\right\}\right| & FN_i &:= \left|\left\{x_k \mid i \in t_k \wedge i \notin t'_k,\ 1 \le k \le n\right\}\right|
\end{aligned}$$
Here, the notation | . | denotes the cardinality of a set.
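Given per-class counts, the five metrics can be sketched in Python as follows (the function name is illustrative; each argument is a list of per-class counts):

```python
def multiclass_metrics(tp, tn, fp, fn):
    """Macro-averaged Acc, Pre, Rec, and MacF, plus the micro F-score (MicF),
    computed from per-class counts; each argument has length l (#classes)."""
    l = len(tp)
    acc = sum((tp[i] + tn[i]) / (tp[i] + tn[i] + fp[i] + fn[i]) for i in range(l)) / l
    pre = sum(tp[i] / (tp[i] + fp[i]) for i in range(l)) / l
    rec = sum(tp[i] / (tp[i] + fn[i]) for i in range(l)) / l
    macf = sum(2 * tp[i] / (2 * tp[i] + fp[i] + fn[i]) for i in range(l)) / l
    # MicF pools the counts over all classes before forming the ratio.
    micf = 2 * sum(tp) / (2 * sum(tp) + sum(fp) + sum(fn))
    return acc, pre, rec, macf, micf
```

Note that MacF averages the per-class F-scores, whereas MicF aggregates the counts first, so the two can differ markedly on imbalanced datasets.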

5.3. Simulation Results

This subsection compares IFPIFSC with the state-of-the-art and well-known classifiers relying on fuzzy and soft sets, i.e., Fuzzy 3NN, FussCyier, FSSC, HDFSSC, FPFSCC, FPFSNHC, FPFS-EC, FPFS-AC, FPFS-CMC, FPFS-3NN(P), FPFS-3NN(S), and FPFS-3NN(K), and with other well-known classifiers, i.e., DT, SVM, BT, RF, and AdaBoost, by utilizing MATLAB R2021b and a laptop with an Intel(R) Core(TM) i5-3230M CPU @ 2.60 GHz and 16 GB RAM. Here, the mean performance results of the classifiers are obtained over 10 independent runs based on 5-fold cross-validation [38,39]. In each cross-validation, the relevant dataset is randomly split into five parts; four of the parts are used for training and the remaining one for testing (for more details about k-fold cross-validation, see [39]). Table 2 provides the average Acc, Pre, Rec, MacF, and MicF results of IFPIFSC, Fuzzy 3NN, FSSC, FussCyier, HDFSSC, FPFSCC, FPFSNHC, FPFS-EC, FPFS-AC, FPFS-CMC, FPFS-3NN(P), FPFS-3NN(S), and FPFS-3NN(K) for the datasets.
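The evaluation protocol (10 independent runs of 5-fold cross-validation, averaged) can be sketched as follows; `classify` is a placeholder for any of the compared classifiers, and the fold-splitting helper is an assumption for illustration:

```python
import random

def five_fold_indices(n, seed):
    """Split sample indices 0..n-1 into 5 random folds (sizes differ by at most 1)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[f::5] for f in range(5)]

def repeated_cv_accuracy(samples, labels, classify, runs=10):
    """Mean accuracy over `runs` independent 5-fold cross-validations.

    `classify(train_x, train_y, test_x)` is a placeholder returning predicted
    labels for the test samples."""
    fold_accs = []
    for run in range(runs):
        for fold in five_fold_indices(len(samples), seed=run):
            test = set(fold)
            train = [i for i in range(len(samples)) if i not in test]
            preds = classify([samples[i] for i in train],
                             [labels[i] for i in train],
                             [samples[i] for i in fold])
            hits = sum(p == labels[i] for p, i in zip(preds, fold))
            fold_accs.append(hits / len(fold))
    return sum(fold_accs) / len(fold_accs)
```

Each of the five folds serves exactly once as the test set per run, so every sample is tested once per cross-validation.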
Table 2 manifests that IFPIFSC classifies the dataset “Mice Protein Expression” exactly, just as HDFSSC, FPFS-EC, FPFS-AC, FPFS-CMC, FPFS-3NN(P), FPFS-3NN(S), and FPFS-3NN(K) do. Moreover, according to all performance metrics, the performance results of IFPIFSC for “Ionosphere”, “Zoo”, “Car Evaluation”, “Semeion Handwritten Digit”, “Parkinson’s Disease Classification”, “Seeds”, “Parkinsons[sic]”, “Breast Cancer Wisconsin”, “Dermatology”, “Wine”, and “Wireless Indoor Localization” are over 89%, 89%, 90%, 92%, 92%, 93%, 93%, 95%, 96%, 97%, and 98%, respectively. In addition, IFPIFSC produces the best results in all performance metrics in “Breast Tissue”, “Wine”, “Sonar” (except for the Pre value), “Seeds”, “Leaf”, “Ionosphere” (except for the Pre value), “Libras Movement”, “Parkinson’s Disease Classification”, “Semeion Handwritten Digit” (except for the Pre value), “Car Evaluation” (except for the Rec and MacF values), and “Wireless Indoor Localization”. Although IFPIFSC does not produce the best results in all performance metrics in “Parkinsons[sic]”, “Parkinson Acoustic”, and “HCV Data”, it generates the closest results to the best ones for these datasets, except for the Pre value in “Parkinsons[sic]” and the Rec and MacF values in “HCV Data”. Consequently, the mean performance results in Table 2 indicate that IFPIFSC is a more efficient classifier than the others on the considered datasets.
IFPIFSC achieves exceptional classification performance owing to its use of the HPS, EPS, MPS, HsPS, JPS, and CPS pseudo-similarities over the ifpifs-matrices space and its Pearson correlation coefficient-based feature weighting. Moreover, Table 3 consists of the ranking numbers of the best results, while Table 4 includes a pairwise comparison of the ranking results.
Afterward, Table 5 provides the average Acc, Pre, Rec, MacF, and MicF results of IFPIFSC, SVM, DT, BT, RF, and AdaBoost for the datasets. Table 5 shows that IFPIFSC exactly classifies the dataset “Mice Protein Expression” just as SVM, DT, and RF do. Furthermore, according to all performance metrics, the performance results of IFPIFSC for “Ionosphere”, “Zoo”, “Car Evaluation”, “Semeion Handwritten Digit”, “Parkinson’s Disease Classification”, “Parkinsons[sic]”, “Seeds”, “Breast Cancer Wisconsin”, “Dermatology”, “Wine”, and “Wireless Indoor Localization” are over 89%, 89%, 90%, 92%, 92%, 92%, 93%, 95%, 96%, 96%, and 98%, respectively. In addition, IFPIFSC produces the best results in all performance metrics in “Breast Tissue”, “Teaching Assistant Evaluation” (except for Pre value), “Parkinsons[sic]”, “Sonar”, “Seeds”, “Parkinson Acoustic”, “Libras Movement”, “Parkinson’s Disease Classification”, and “Wireless Indoor Localization”. Moreover, though IFPIFSC does not generate the best results in all performance metrics in “Wine”, “Leaf”, and “Dermatology”, it produces the closest results to the best ones for these datasets. As a result, the mean performance results in Table 5 demonstrate that IFPIFSC is a more efficacious classifier than other classifiers on the considered datasets. Moreover, Table 6 consists of ranking numbers of the best results, while Table 7 includes a pairwise comparison of the ranking results.

5.4. Statistical Evaluation

The present subsection performs the Friedman test [32], a non-parametric test, and the Nemenyi test [33], a post hoc test, following the procedure proposed by Demšar [34] to analyze all the performance results acquired in view of Acc, Pre, Rec, MacF, and MicF. The Friedman test generates a performance-based ranking of the classifiers for each dataset. Thus, the rank of 1 implies the best-performing classifier, the rank of 2 the second best, etc. If the performances of some classifiers are equal, the average of their possible ranks is assigned to each of them. The test then compares the average ranks and calculates $\chi_F^2$, distributed with $k-1$ degrees of freedom. Here, $k$ denotes the number of classifiers. Afterward, a post hoc test, e.g., the Nemenyi test, is employed to determine the differences between the classifiers. Differences between any two classifiers greater than the critical distance are accepted as statistically significant.
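The Friedman statistic can be computed from a rank matrix as $\chi_F^2=\frac{12N}{k(k+1)}\left[\sum_j R_j^2-\frac{k(k+1)^2}{4}\right]$, where $R_j$ is the average rank of classifier $j$. A small sketch with synthetic ranks (not the paper's) follows:

```python
def friedman_statistic(ranks):
    """Friedman chi-square from an (N x k) matrix of per-dataset ranks,
    where rank 1 is the best classifier on that dataset."""
    n, k = len(ranks), len(ranks[0])
    # Average rank of each classifier over the N datasets.
    col_means = [sum(row[j] for row in ranks) / n for j in range(k)]
    return (12 * n / (k * (k + 1))) * (sum(r * r for r in col_means)
                                       - k * (k + 1) ** 2 / 4)
```

For example, if one classifier always ranks first, another second, and a third last over four datasets, the statistic evaluates to 8.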
This subsection calculates each classifier’s average ranking using the Friedman test. Here, the number of fuzzy-based classifiers compared with IFPIFSC is 12, i.e., $k = 13$, and the number of datasets is 20, i.e., $N = 20$. The Friedman test statistics of the Acc, Pre, Rec, MacF, and MicF results are $\chi_F^2 = 108.60$, $\chi_F^2 = 106.69$, $\chi_F^2 = 90.48$, $\chi_F^2 = 108.51$, and $\chi_F^2 = 110.43$, respectively. For $k = 13$ and $N = 20$, the Friedman test critical value is 21.03 at the $\alpha = 0.05$ significance level (for more details, see [40]). Since the Friedman test statistics of Acc (108.60), Pre (106.69), Rec (90.48), MacF (108.51), and MicF (110.43) are greater than the critical value 21.03, there is a significant difference between the performances of the classifiers. Hence, the null hypothesis “There are no performance differences between the classifiers” is rejected, and, thus, the Nemenyi test can be applied. For $k = 13$, $N = 20$, and $\alpha = 0.05$, since the critical value for the infinite degrees of freedom in the Studentized Range $q$ table is $cv = 4.685$, the critical distance according to the Nemenyi test is $cd = \frac{cv}{\sqrt{2}}\sqrt{\frac{k(k+1)}{6N}} = \frac{4.685}{\sqrt{2}}\sqrt{\frac{13\times(13+1)}{6\times 20}} \approx 4.0798$. The critical diagrams produced by the Nemenyi test for the five performance metrics are presented in Figure 2. Figure 2 manifests that the performance differences between the average rankings of IFPIFSC and those of FPFS-CMC, FPFSCC, Fuzzy kNN, FPFSNHC, FSSC, FussCyier, and HDFSSC are greater than the critical distance (4.0798). Figure 2 also shows that even though the differences between the average rankings of IFPIFSC and those of FPFS-EC, FPFS-AC, FPFS-3NN(P), FPFS-3NN(S), and FPFS-3NN(K) are less than the critical distance (4.0798), IFPIFSC performs better than them concerning all performance metrics. Therefore, IFPIFSC statistically outperforms the others for all five performance metrics.
Secondly, this subsection calculates each classifier’s average ranking using the Friedman test. Here, the number of non-fuzzy-based classifiers compared with IFPIFSC is 5, i.e., $k = 6$, and the number of datasets is 20, i.e., $N = 20$. The Friedman test statistics of the Acc, Pre, Rec, MacF, and MicF results are $\chi_F^2 = 48.65$, $\chi_F^2 = 45.28$, $\chi_F^2 = 39.93$, $\chi_F^2 = 45.64$, and $\chi_F^2 = 48.65$, respectively. For $k = 6$ and $N = 20$, the Friedman test critical value is 11.07 at the $\alpha = 0.05$ significance level (for more details, see [40]). Since the Friedman test statistics of Acc (48.65), Pre (45.28), Rec (39.93), MacF (45.64), and MicF (48.65) are greater than the critical value 11.07, there is a significant difference between the performances of the classifiers. Thereby, the null hypothesis “There are no performance differences between the classifiers” is rejected, and, thus, the Nemenyi test can be applied. For $k = 6$, $N = 20$, and $\alpha = 0.05$, since the value for the infinite degrees of freedom in the Studentized Range $q$ table is 4.030, the critical distance according to the Nemenyi test is $\frac{4.030}{\sqrt{2}}\sqrt{\frac{6\times(6+1)}{6\times 20}} \approx 1.686$. The critical diagrams generated by the Nemenyi test for the five performance metrics are presented in Figure 3.
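The two critical distances can be reproduced with a few lines of Python, using the Studentized range values consistent with the reported distances ($q \approx 4.685$ for $k = 13$ and $q = 4.030$ for $k = 6$, values taken as assumptions from standard tables):

```python
import math

def nemenyi_cd(q_inf, k, n_datasets):
    """Nemenyi critical distance: CD = (q / sqrt(2)) * sqrt(k(k+1) / (6N)),
    with q the Studentized range value for k groups and infinite df."""
    return (q_inf / math.sqrt(2)) * math.sqrt(k * (k + 1) / (6 * n_datasets))
```

With $N = 20$ datasets, `nemenyi_cd(4.685, 13, 20)` gives about 4.0798 for the fuzzy-based comparison and `nemenyi_cd(4.030, 6, 20)` about 1.686 for the non-fuzzy-based one.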
Figure 3 demonstrates that the performance differences between the average rankings of IFPIFSC and those of AdaBoost (MacF), SVM, and DT are greater than the critical distance (1.686). In addition, Figure 3 indicates that IFPIFSC performs better than RF, BT, and AdaBoost in terms of all performance metrics, even though the differences between their average rankings and that of IFPIFSC are less than the critical distance (1.686). Therefore, IFPIFSC statistically outperforms the others for all five performance metrics.

5.5. Comparison of the Time Complexity

The present section compares the time complexities of the classifiers using big O notation. From the pseudocode of IFPIFSC, its time complexity is $O(mn)$ since $mn$ is greater than $6m$ for each test sample. Here, $m$ and $n$ are the numbers of training samples and of their attributes, respectively. The time complexities of the compared classifiers, in big O notation, are presented in Table 8.

6. Conclusions

This study defined the concepts of metrics, quasi-, semi-, and pseudo-metrics and of similarities, quasi-, semi-, and pseudo-similarities over ifpifs-matrices. Thus, it theoretically contributed to the literature. Next, this study suggested five pseudo-metrics and seven pseudo-similarities over ifpifs-matrices, confirming the existence of the aforementioned contribution. Later, it propounded IFPIFSC, which simultaneously uses six of the proposed pseudo-similarities, and applied it to a data classification problem. In this way, this study clarified how to construct ifpifs-matrices and apply them to real problems in data classification. Furthermore, it compared IFPIFSC with the well-known and state-of-the-art classifiers Fuzzy kNN, FSSC, FussCyier, HDFSSC, FPFSCC, FPFSNHC, FPFS-EC, FPFS-AC, FPFS-CMC, FPFS-kNN(P), FPFS-kNN(S), FPFS-kNN(K), SVM, DT, BT, RF, and AdaBoost and statistically analyzed the comparison results. Thereby, the present study manifested that the proposed method achieves the best performance results and, thus, is a highly convenient method in supervised learning.
The success of the available classifiers has natural limits depending on the datasets. Therefore, IFPIFSC has been designed to cope with these drawbacks. This classifier allows the use of novel multiple-similarity functions and threshold values. By this means, IFPIFSC is open to improvement: one way to improve the proposed classifier is to define or use different similarity measures over ifpifs-matrices. The second is to adapt the values $\lambda_1$ and $\lambda_2$ used in the intuitionistic fuzzification of real data. The third is to use SDM methods constructed by fpfs- or ifpifs-matrices to employ multiple pseudo-similarities, similar to FPFS-AC and FPFS-CMC [12,13]. The fourth, to reduce the negative effects of inconsistent data on the classification success, is to develop an effective preprocessing step that eliminates or excludes inconsistent data from the evaluation using rough sets [41,42]. The fifth is to develop similar classifiers constructed by interval-valued intuitionistic fuzzy parameterized interval-valued intuitionistic fuzzy soft matrices [43], which model further uncertainties than intuitionistic fuzzy ones. To cope with different uncertainties, the sixth is to consider new concepts, such as picture fuzzy sets [44,45], Pythagorean fuzzy sets [46,47], Fermatean fuzzy sets [48], q-rung orthopair fuzzy sets [49,50], T-spherical fuzzy sets [51,52], interval-valued fuzzy sets [53,54], interval-valued intuitionistic fuzzy sets [55], and bipolar fuzzy sets [56,57,58]. Finally, IFPIFSC can be customized to produce nearly 100% performance, especially for medical diagnosis problems. Classifiers whose codes are not shared privately or on online platforms, such as MathWorks and GitHub, are beyond the scope of this study.

Author Contributions

Ç.C. directed the project and supervised this study’s findings. B.A. and T.A. devised the main conceptual ideas and developed the theoretical framework. S.M. and B.A. performed the experiments and statistical analyses. S.M. wrote the manuscript with support from B.A., T.A. and S.E. S.E. reviewed and edited the paper. All the authors discussed the results and contributed to the final paper. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Office of Scientific Research Projects Coordination at Çanakkale Onsekiz Mart University, Grant number: FHD-2020-3465.

Data Availability Statement

The datasets employed and analyzed during the present study are available from the UCI-MLR.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the study’s design; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

fpfs-set [7]: Fuzzy Parameterized Fuzzy Soft Set
fpfs-matrix [8]: Fuzzy Parameterized Fuzzy Soft Matrix
ifs-set [17]: Intuitionistic Fuzzy Soft Set
ifps-set [18]: Intuitionistic Fuzzy Parameterized Soft Set
ifpfs-set [19]: Intuitionistic Fuzzy Parameterized Fuzzy Soft Set
ifpifs-set [20]: Intuitionistic Fuzzy Parameterized Intuitionistic Fuzzy Soft Set
ifpifs-matrix [21]: Intuitionistic Fuzzy Parameterized Intuitionistic Fuzzy Soft Matrix
SDM: Soft Decision-Making
kNN [59,60]: k-Nearest Neighbor
Fuzzy kNN [23]: Fuzzy k-Nearest Neighbors
FSSC [24]: Fuzzy Soft Set Classifier
FussCyier [25]: Fuzzy Soft Set Classifier Using Distance-Based Similarity Measure
HDFSSC [26]: Hamming Distance-Based Fuzzy Soft Set Classifier
FPFSCC [10]: Fuzzy Parameterized Fuzzy Soft Chebyshev Classifier
FPFSNHC [9]: Fuzzy Parameterized Fuzzy Soft Normalized Hamming Classifier
FPFS-EC [11]: Fuzzy Parameterized Fuzzy Soft Euclidean Classifier
FPFS-CMC [12]: Comparison Matrix-Based Fuzzy Parameterized Fuzzy Soft Classifier
FPFS-AC [13]: Fuzzy Parameterized Fuzzy Soft Aggregation Classifier
FPFS-kNN [14]: Fuzzy Parameterized Fuzzy Soft k-Nearest Neighbor
SVM [27]: Support Vector Machines
DT [28]: Decision Trees
BT [29]: Boosting Trees
AdaBoost [31]: Adaptive Boosting
RF [30]: Random Forests
IFPIFSC (in this paper): Intuitionistic Fuzzy Parameterized Intuitionistic Fuzzy Soft Classifier
UCI-MLR [22]: UC Irvine Machine Learning Repository
Acc: Accuracy
Pre: Precision
Rec: Recall
MacF: Macro F-score
MicF: Micro F-score

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef]
  2. Wu, H.C. Mathematical Foundations of Fuzzy Sets; Wiley: Hoboken, NJ, USA, 2023. [Google Scholar]
  3. Molodtsov, D. Soft set theory—First results. Comput. Math. Appl. 1999, 37, 19–31. [Google Scholar] [CrossRef]
  4. Molodtsov, D.A. Soft Set Theory; URSS: Moscow, Russia, 2004. (In Russian) [Google Scholar]
  5. Maji, P.K.; Biswas, R.; Roy, A.R. Soft set theory. Comput. Math. Appl. 2003, 45, 555–562. [Google Scholar] [CrossRef]
  6. Maji, P.K.; Biswas, R.; Roy, A.R. Fuzzy soft sets. J. Fuzzy Math. 2001, 9, 589–602. [Google Scholar]
  7. Çağman, N.; Çıtak, F.; Enginoğlu, S. Fuzzy parameterized fuzzy soft set theory and its applications. Turk. J. Fuzzy Syst. 2010, 1, 21–35. [Google Scholar]
  8. Enginoğlu, S.; Çağman, N. Fuzzy parameterized fuzzy soft matrices and their application in decision-making. TWMS J. Apl. Eng. Math. 2020, 10, 1105–1115. [Google Scholar]
  9. Memiş, S.; Enginoğlu, S.; Erkan, U. A data classification method in machine learning based on normalised Hamming pseudo-similarity of fuzzy parameterized fuzzy soft matrices. Bilge Int. J. Sci. Technol. Res. 2019, 3, 1–8. [Google Scholar] [CrossRef]
  10. Memiş, S.; Enginoğlu, S. An Application of Fuzzy Parameterized Fuzzy Soft Matrices in Data Classification. In Proceedings of the International Conferences on Natural Sciences and Technology, Prizren, Kosovo, 26–30 August 2019; Kılıç, M., Özkan, K., Karaboyacı, M., Taşdelen, K., Kandemir, H., Beram, A., Eds.; University of Prizren: Prizren, Kosovo, 2019; pp. 68–77. [Google Scholar]
  11. Memiş, S.; Enginoğlu, S.; Erkan, U. Numerical data classification via distance-based similarity measures of fuzzy parameterized fuzzy soft matrices. IEEE Access 2021, 9, 88583–88601. [Google Scholar] [CrossRef]
  12. Memiş, S.; Enginoğlu, S.; Erkan, U. A classification method in machine learning based on soft decision-making via fuzzy parameterized fuzzy soft matrices. Soft Comput. 2022, 1165–1180. [Google Scholar] [CrossRef]
  13. Memiş, S.; Enginoğlu, S.; Erkan, U. A new classification method using soft decision-making based on an aggregation operator of fuzzy parameterized fuzzy soft matrices. Turk. J. Elec. Eng. Comp. Sci. 2022, 30, 871–890. [Google Scholar] [CrossRef]
  14. Memiş, S.; Enginoğlu, S.; Erkan, U. Fuzzy parameterized fuzzy soft k-nearest neighbor classifier. Neurocomputing 2022, 500, 351–378. [Google Scholar] [CrossRef]
  15. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  16. Atanassov, K.T. On Intuitionistic Fuzzy Sets Theory; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  17. Maji, P.K.; Biswas, R.; Roy, A.R. Intuitionistic fuzzy soft sets. J. Fuzzy Math. 2001, 9, 677–692. [Google Scholar]
  18. Deli, İ.; Çağman, N. Intuitionistic fuzzy parameterized soft set theory and its decision making. Appl. Soft Comput. 2015, 28, 109–113. [Google Scholar] [CrossRef]
  19. El-Yagubi, E.; Salleh, A.R. Intuitionistic fuzzy parameterised fuzzy soft set. J. Qual. Meas. Anal. 2013, 9, 73–81. [Google Scholar]
  20. Karaaslan, F. Intuitionistic fuzzy parameterized intuitionistic fuzzy soft sets with applications in decision making. Ann. Fuzzy Math. Inform. 2016, 11, 607–619. [Google Scholar]
  21. Enginoğlu, S.; Arslan, B. Intuitionistic fuzzy parameterized intuitionistic fuzzy soft matrices and their application in decision-making. Comput. Appl. Math. 2020, 39, 325. [Google Scholar] [CrossRef]
  22. Dua, D.; Graff, C. UCI Machine Learning Repository. Intell. Control. Autom. 2019, 10, 4. [Google Scholar]
  23. Keller, J.M.; Gray, M.R.; Givens, J.A. A fuzzy k-nearest neighbor algorithm. IEEE Trans. Syst. Man Cybern. 1985, 15, 580–585. [Google Scholar] [CrossRef]
  24. Handaga, B.; Onn, H.; Herawan, T. FSSC: An algorithm for classifying numerical data using fuzzy soft set theory. Int. J. Fuzzy Syst. Appl. 2012, 3, 29–46. [Google Scholar] [CrossRef]
  25. Lashari, S.A.; Ibrahim, R.; Senan, N. Medical data classification using similarity measure of fuzzy soft set based distance measure. J. Telecommun. Electron. Comput. Eng. 2017, 9, 95–99. [Google Scholar]
  26. Yanto, I.T.R.; Seadudin, R.R.; Lashari, S.A.; Haviluddin. A Numerical Classification Technique Based on Fuzzy Soft Set using Hamming Distance. In Proceedings of the Third International Conference on Soft Computing and Data Mining, Johor, Malaysia, 6–7 February 2018; Ghazali, R., Deris, M.M., Nawi, N.M., Abawajy, J.H., Eds.; Springer: Johor, Malaysia, 2018; pp. 252–260. [Google Scholar]
  27. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  28. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees, 3rd ed.; CRC Press: Wadsworth, OH, USA, 1998. [Google Scholar]
  29. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  30. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  31. Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
  32. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  33. Nemenyi, P.B. Distribution-Free Multiple Comparisons; Princeton University: Princeton, NJ, USA, 1963. [Google Scholar]
  34. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  35. Memiş, S.; Arslan, B.; Aydın, T.; Enginoğlu, S.; Camcı, Ç. A classification method based on Hamming pseudo-similarity of intuitionistic fuzzy parameterized intuitionistic fuzzy soft matrices. J. New Results Sci. 2021, 10, 59–76. [Google Scholar]
  36. Fawcett, T. An introduction to ROC analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
  37. Nguyen, T.T.; Dang, M.T.; Luong, A.V.; Liew, A.W.C.; Liang, T.; McCall, J. Multi-label classification via incremental clustering on an evolving data stream. Pattern Recognit. 2019, 95, 96–113. [Google Scholar] [CrossRef]
  38. Erkan, U. A precise and stable machine learning algorithm: Eigenvalue classification (EigenClass). Neural. Comput. Appl. 2021, 33, 5381–5392. [Google Scholar] [CrossRef]
  39. Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B Stat. Methodol. 1974, 36, 111–147. [Google Scholar] [CrossRef]
  40. Zar, J.H. Biostatistical Analysis, 5th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010; p. 672. [Google Scholar]
  41. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  42. Akram, M.; Zafar, F. Soft Rough Fuzzy Graphs. In Hybrid Soft Computing Models Applied to Graph Theory; Springer International Publishing: Cham, Switzerland, 2020; pp. 323–352. [Google Scholar]
  43. Aydın, T.; Enginoğlu, S. Interval-valued intuitionistic fuzzy parameterized interval-valued intuitionistic fuzzy soft matrices and their application to performance-based value assignment to noise removal filters. Comput. Appl. Math. 2022, 41, 192. [Google Scholar] [CrossRef]
  44. Cuong, B.C. Picture fuzzy sets. J. Comput. Sci. Cybern. 2014, 30, 409–420. [Google Scholar]
  45. Memiş, S. Another view on picture fuzzy soft sets and their product operations with soft decision-making. J. New Theory 2022, 38, 1–13. [Google Scholar] [CrossRef]
  46. Yang, W. New Similarity Measures for Soft Sets and Their Application. Fuzzy Inf. Eng. 2013, 1, 19–25. [Google Scholar] [CrossRef]
  47. Garg, H.; Deng, Y.; Ali, Z.; Mahmood, T. Decision-making strategy based on Archimedean Bonferroni mean operators under complex Pythagorean fuzzy information. Comp. Appl. Math. 2022, 41, 15240. [Google Scholar] [CrossRef]
  48. Senapati, T.; Yager, R.R. Fermatean fuzzy sets. J. Ambient. Intell. Human. Comput. 2020, 11, 663–674. [Google Scholar] [CrossRef]
  49. Yager, R.R. Generalized orthopair fuzzy sets. IEEE Trans. Fuzzy. Syst. 2017, 25, 1222–1230. [Google Scholar] [CrossRef]
  50. Farid, H.M.A.; Riaz, M. q-rung orthopair fuzzy Aczel–Alsina aggregation operators with multi-criteria decision-making. Eng. Appl. Artif. Intell. 2023, 122, 106105. [Google Scholar] [CrossRef]
  51. Mahmood, T.; Ullah, K.; Khan, Q.; Jan, N. An approach toward decision-making and medical diagnosis problems using the concept of spherical fuzzy sets. Neural. Comput. Appl. 2019, 31, 7041–7053. [Google Scholar] [CrossRef]
  52. Farid, H.M.A.; Riaz, M.; Khan, Z.A. T-spherical fuzzy aggregation operators for dynamic decision-making with its application. Alex. Eng. J. 2023, 72, 97–115. [Google Scholar] [CrossRef]
  53. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  54. Gorzałczany, M.B. A method of inference in approximate reasoning based on interval-valued fuzzy sets. Fuzzy Sets Syst. 1987, 21, 1–17. [Google Scholar] [CrossRef]
  55. Atanassov, K.; Gargov, G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349. [Google Scholar] [CrossRef]
  56. Zhang, W.R. Bipolar Fuzzy Sets and Relations: A Computational Framework for Cognitive Modeling and Multiagent Decision Analysis. In Proceedings of the NAFIPS/IFIS/NASA ’94 First International Joint Conference of The North American Fuzzy Information Processing Society Biannual Conference, The Industrial Fuzzy Control and Intelligent Systems, San Antonio, CA, USA, 18–21 December 1994; pp. 305–309. [Google Scholar]
  57. Mahmood, T. A novel approach toward bipolar soft sets and their applications. J. Math. 2020, 2020, 4690808. [Google Scholar] [CrossRef]
  58. Deli, İ.; Karaaslan, F. Bipolar FPSS-theory with applications in decision making. Afr. Mat. 2020, 31, 493–505. [Google Scholar] [CrossRef]
  59. Fix, E.; Hodges, J.L. Discriminatory Analysis, Nonparametric Discrimination: Consistency Properties; USAF School of Aviation Medicine, Randolph Field: Universal City, TX, USA, 1951. [Google Scholar]
  60. Cover, T.M.; Hart, P.E. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
Figure 1. IFPIFSC’s flowchart.
Figure 2. The critical diagrams obtained by the Friedman test and Nemenyi test at 0.05 significance level for the five performance criteria (for fuzzy-based classifiers).
Figure 3. The critical diagrams obtained by the Friedman test and Nemenyi test at 0.05 significance level for the five performance criteria (for non-fuzzy-based classifiers).
Table 1. Descriptions of UCI datasets.

| No. | Name | #Instance | #Attribute | #Class | Balanced/Imbalanced |
|-----|------|-----------|------------|--------|---------------------|
| 1 | Zoo | 101 | 16 | 7 | Imbalanced |
| 2 | Breast Tissue | 106 | 9 | 6 | Imbalanced |
| 3 | Teaching Assistant Evaluation | 151 | 5 | 3 | Imbalanced |
| 4 | Wine | 178 | 13 | 3 | Imbalanced |
| 5 | Parkinsons[sic] | 195 | 22 | 2 | Imbalanced |
| 6 | Sonar | 208 | 60 | 2 | Imbalanced |
| 7 | Seeds | 210 | 7 | 3 | Balanced |
| 8 | Parkinson Acoustic | 240 | 46 | 2 | Balanced |
| 9 | Ecoli | 336 | 7 | 8 | Imbalanced |
| 10 | Leaf | 340 | 14 | 36 | Imbalanced |
| 11 | Ionosphere | 351 | 34 | 2 | Imbalanced |
| 12 | Libras Movement | 360 | 90 | 15 | Balanced |
| 13 | Dermatology | 366 | 34 | 6 | Imbalanced |
| 14 | Breast Cancer Wisconsin | 569 | 30 | 2 | Imbalanced |
| 15 | HCV Data | 589 | 12 | 5 | Imbalanced |
| 16 | Parkinson’s Disease Classification | 756 | 754 | 2 | Imbalanced |
| 17 | Mice Protein Expression | 1077 | 72 | 8 | Imbalanced |
| 18 | Semeion Handwritten Digit | 1593 | 256 | 10 | Imbalanced |
| 19 | Car Evaluation | 1728 | 6 | 4 | Imbalanced |
| 20 | Wireless Indoor Localization | 2000 | 7 | 4 | Balanced |

# stands for “the number of”.
Table 2. Simulation results of the fuzzy-based classifiers.

| Datasets | Classifiers | Acc ± SD | Pre ± SD | Rec ± SD | MacF ± SD | MicF ± SD |
|----------|-------------|----------|----------|----------|-----------|-----------|
| Zoo | Fuzzy 3NN | 97.63 ± 1.42 | 90.41 ± 7.22 | 84.13 ± 10.2 | 92.05 ± 5.77 | 91.77 ± 4.98 |
| | FSSC | 97.97 ± 1.32 | 90.03 ± 9.13 | 86.56 ± 9.46 | 93.25 ± 4.92 | 93.06 ± 4.51 |
| | FussCyier | 97.74 ± 1.42 | 89.39 ± 9.23 | 86.27 ± 9.46 | 92.68 ± 5.21 | 92.26 ± 4.85 |
| | HDFSSC | 98.29 ± 1.4 | 91.72 ± 8.15 | 87.45 ± 10.93 | 93.48 ± 5.2 | 94.15 ± 4.79 |
| | FPFSCC | 97.17 ± 2.13 | 88.27 ± 10.11 | 82.05 ± 12.41 | 89.22 ± 7.87 | 90.27 ± 7.49 |
| | FPFSNHC | 98.29 ± 1.43 | 92 ± 8.53 | 87.17 ± 11.35 | 93.26 ± 5.81 | 94.15 ± 4.9 |
| | FPFS-EC | 98.85 ± 1.12 | 94.34 ± 6.98 | 89.86 ± 10.24 | 96.6 ± 4.17 | 96.04 ± 3.88 |
| | FPFS-AC | 98.36 ± 1.3 | 91.66 ± 8.15 | 85.9 ± 10.94 | 94.94 ± 5.42 | 94.35 ± 4.49 |
| | FPFS-CMC | 98.73 ± 1.48 | 93.81 ± 8.43 | 89.19 ± 12.28 | 96.31 ± 5.2 | 95.64 ± 5.08 |
| | FPFS-3NN(P) | 98.22 ± 1.29 | 92.03 ± 7.25 | 86.67 ± 10.38 | 93.17 ± 5.13 | 93.87 ± 4.51 |
| | FPFS-3NN(S) | 98.25 ± 1.26 | 92.35 ± 6.81 | 87.1 ± 10.18 | 93.23 ± 5.27 | 93.97 ± 4.38 |
| | FPFS-3NN(K) | 98.25 ± 1.26 | 92.35 ± 6.81 | 87.1 ± 10.18 | 93.23 ± 5.27 | 93.97 ± 4.38 |
| | IFPIFSC | 98.65 ± 1.23 | 92.79 ± 6.53 | 89.92 ± 8.2 | 96.31 ± 3.69 | 95.35 ± 3.38 |
| Breast Tissue | Fuzzy 3NN | 84.37 ± 2.73 | 56.35 ± 9.71 | 51.64 ± 8.92 | 57.4 ± 7.04 | 53.1 ± 8.18 |
| | FSSC | 87.83 ± 2.88 | 64.48 ± 10.14 | 61.95 ± 8.99 | 66.11 ± 7.27 | 63.48 ± 8.65 |
| | FussCyier | 87.19 ± 2.97 | 64.15 ± 9.11 | 60.34 ± 9.26 | 64.79 ± 6.92 | 61.58 ± 8.91 |
| | HDFSSC | 87.73 ± 3 | 67.57 ± 9.55 | 62.07 ± 9.08 | 64.47 ± 8.27 | 63.2 ± 9 |
| | FPFSCC | 87.29 ± 2.65 | 63.77 ± 10.12 | 60.17 ± 8.76 | 67.03 ± 9.22 | 61.87 ± 7.95 |
| | FPFSNHC | 87.89 ± 3.23 | 66.78 ± 10.22 | 62.41 ± 10.49 | 66.32 ± 7.89 | 63.66 ± 9.69 |
| | FPFS-EC | 88.11 ± 2.74 | 65.95 ± 7.98 | 63.15 ± 8.84 | 70.24 ± 8.51 | 64.33 ± 8.23 |
| | FPFS-AC | 89.58 ± 2.66 | 69.54 ± 8.57 | 68.05 ± 8.31 | 71.32 ± 7.91 | 68.75 ± 7.98 |
| | FPFS-CMC | 87.82 ± 2.86 | 66.36 ± 8.3 | 62.67 ± 8.98 | 69.26 ± 8.6 | 63.47 ± 8.59 |
| | FPFS-3NN(P) | 88.61 ± 2.51 | 65.99 ± 7.5 | 64.44 ± 8.14 | 69.99 ± 7.87 | 65.83 ± 7.54 |
| | FPFS-3NN(S) | 88.01 ± 2 | 64.35 ± 5.82 | 62.53 ± 6.52 | 69.23 ± 6.3 | 64.03 ± 6.01 |
| | FPFS-3NN(K) | 87.76 ± 2.2 | 63.65 ± 6.44 | 61.84 ± 7.16 | 68.6 ± 5.97 | 63.27 ± 6.6 |
| | IFPIFSC | 91.39 ± 2.91 | 75.66 ± 9.25 | 73.18 ± 9.1 | 73.97 ± 8.64 | 74.16 ± 8.73 |
| Teaching Assistant Evaluation | Fuzzy 3NN | 72.06 ± 5.53 | 59.99 ± 8.74 | 58.06 ± 8.36 | 57.23 ± 8.83 | 58.08 ± 8.3 |
| | FSSC | 63.6 ± 4.17 | 49.63 ± 13.36 | 45.98 ± 6.28 | 43.62 ± 6.25 | 45.41 ± 6.25 |
| | FussCyier | 63.69 ± 4.33 | 49.43 ± 12.15 | 46.09 ± 6.47 | 43.33 ± 6.56 | 45.53 ± 6.49 |
| | HDFSSC | 69.37 ± 4.66 | 55.55 ± 7.82 | 54.2 ± 7.07 | 53.37 ± 7.17 | 54.06 ± 6.99 |
| | FPFSCC | 69.12 ± 5.83 | 54.57 ± 9.52 | 53.77 ± 8.73 | 52.49 ± 9.16 | 53.68 ± 8.75 |
| | FPFSNHC | 60.86 ± 4.75 | 47.85 ± 14.61 | 41.84 ± 7.21 | 39.41 ± 6.38 | 41.3 ± 7.13 |
| | FPFS-EC | 75.53 ± 5.42 | 64.65 ± 9.06 | 63.2 ± 8.24 | 62.67 ± 8.51 | 63.29 ± 8.13 |
| | FPFS-AC | 75.75 ± 4.67 | 64.96 ± 7.6 | 63.6 ± 6.96 | 62.9 ± 7.29 | 63.63 ± 7.01 |
| | FPFS-CMC | 75.62 ± 4.75 | 64.92 ± 7.88 | 63.41 ± 7.08 | 62.7 ± 7.39 | 63.43 ± 7.12 |
| | FPFS-3NN(P) | 72.44 ± 5.48 | 59.41 ± 9.17 | 58.48 ± 8.34 | 57.54 ± 8.68 | 58.66 ± 8.22 |
| | FPFS-3NN(S) | 72.39 ± 5.07 | 58.98 ± 8.43 | 58.39 ± 7.7 | 57.5 ± 7.97 | 58.58 ± 7.61 |
| | FPFS-3NN(K) | 72.3 ± 5.19 | 58.86 ± 8.64 | 58.26 ± 7.88 | 57.37 ± 8.19 | 58.45 ± 7.79 |
| | IFPIFSC | 75.65 ± 4.48 | 64.43 ± 7.28 | 63.31 ± 6.78 | 62.6 ± 6.91 | 63.47 ± 6.72 |
| Wine | Fuzzy 3NN | 82.24 ± 4.86 | 73.79 ± 7.79 | 72.06 ± 7.39 | 72.22 ± 7.54 | 73.36 ± 7.3 |
| | FSSC | 96.26 ± 2.39 | 94.88 ± 3.1 | 95.3 ± 2.99 | 94.63 ± 3.46 | 94.38 ± 3.58 |
| | FussCyier | 96.44 ± 2.21 | 94.97 ± 3.1 | 95.42 ± 2.89 | 94.91 ± 3.19 | 94.66 ± 3.31 |
| | HDFSSC | 95.36 ± 2.66 | 93.49 ± 3.7 | 93.84 ± 3.61 | 93.35 ± 3.84 | 93.03 ± 3.99 |
| | FPFSCC | 92.43 ± 2.53 | 89.31 ± 3.6 | 89.99 ± 3.4 | 88.89 ± 3.79 | 88.65 ± 3.8 |
| | FPFSNHC | 95.54 ± 2.82 | 93.79 ± 3.74 | 94.41 ± 3.53 | 93.47 ± 4.23 | 93.31 ± 4.24 |
| | FPFS-EC | 97.64 ± 1.69 | 96.59 ± 2.42 | 97.04 ± 2.1 | 96.61 ± 2.45 | 96.46 ± 2.53 |
| | FPFS-AC | 95.87 ± 3.02 | 94.62 ± 3.45 | 94.82 ± 3.82 | 94.11 ± 4.42 | 93.81 ± 4.52 |
| | FPFS-CMC | 97.22 ± 2.64 | 96.15 ± 3.51 | 96.52 ± 3.31 | 96 ± 3.9 | 95.84 ± 3.96 |
| | FPFS-3NN(P) | 97.19 ± 2.15 | 96.03 ± 2.94 | 96.46 ± 2.72 | 95.93 ± 3.13 | 95.79 ± 3.22 |
| | FPFS-3NN(S) | 97.3 ± 2.28 | 96.25 ± 2.98 | 96.61 ± 2.87 | 96.13 ± 3.25 | 95.95 ± 3.42 |
| | FPFS-3NN(K) | 96.74 ± 2.54 | 95.59 ± 3.1 | 95.91 ± 3.2 | 95.34 ± 3.59 | 95.11 ± 3.8 |
| | IFPIFSC | 98.24 ± 1.71 | 97.65 ± 2.12 | 97.79 ± 2.16 | 97.56 ± 2.36 | 97.36 ± 2.57 |
| Parkinsons[sic] | Fuzzy 3NN | 85.38 ± 4.25 | 81.81 ± 6.39 | 78.34 ± 6.89 | 79.19 ± 6.29 | 85.38 ± 4.25 |
| | FSSC | 73.79 ± 6.35 | 72.76 ± 4.16 | 79.88 ± 5.09 | 71.49 ± 5.98 | 73.79 ± 6.35 |
| | FussCyier | 73.9 ± 6.44 | 73.25 ± 3.95 | 80.51 ± 4.79 | 71.73 ± 6.01 | 73.9 ± 6.44 |
| | HDFSSC | 78.21 ± 6.16 | 75.13 ± 5.07 | 82.04 ± 5.57 | 75.41 ± 6.11 | 78.21 ± 6.16 |
| | FPFSCC | 74.92 ± 6.14 | 68.07 ± 8.01 | 70.61 ± 10.07 | 68.22 ± 8.38 | 74.92 ± 6.14 |
| | FPFSNHC | 73.9 ± 6.51 | 72.86 ± 4.3 | 79.94 ± 5.16 | 71.58 ± 6.15 | 73.9 ± 6.51 |
| | FPFS-EC | 95.85 ± 3.15 | 94.37 ± 4.71 | 95.15 ± 4.12 | 94.48 ± 4.17 | 95.85 ± 3.15 |
| | FPFS-AC | 92.97 ± 4.27 | 91.04 ± 5.81 | 90.83 ± 6.1 | 90.56 ± 5.67 | 92.97 ± 4.27 |
| | FPFS-CMC | 95.03 ± 3.29 | 92.85 ± 4.62 | 94.67 ± 4.17 | 93.5 ± 4.24 | 95.03 ± 3.29 |
| | FPFS-3NN(P) | 94.41 ± 3.8 | 93.31 ± 5.26 | 92.03 ± 5.21 | 92.38 ± 5.01 | 94.41 ± 3.8 |
| | FPFS-3NN(S) | 93.95 ± 3.62 | 93.2 ± 5.11 | 90.83 ± 5.59 | 91.6 ± 5.03 | 93.95 ± 3.62 |
| | FPFS-3NN(K) | 93.95 ± 3.62 | 93.2 ± 5.11 | 90.83 ± 5.59 | 91.6 ± 5.03 | 93.95 ± 3.62 |
| | IFPIFSC | 95.23 ± 3.15 | 93.22 ± 4.51 | 94.99 ± 4.17 | 93.73 ± 4.11 | 95.23 ± 3.15 |
| Sonar | Fuzzy 3NN | 82.5 ± 5.73 | 83.3 ± 5.77 | 82.04 ± 5.89 | 82.15 ± 5.89 | 82.5 ± 5.73 |
| | FSSC | 74.92 ± 7.5 | 75.5 ± 7.88 | 74.44 ± 7.62 | 74.42 ± 7.7 | 74.92 ± 7.5 |
| | FussCyier | 72.12 ± 5.63 | 73.68 ± 5.82 | 72.79 ± 5.66 | 71.94 ± 5.73 | 72.12 ± 5.63 |
| | HDFSSC | 69.38 ± 7.7 | 69.75 ± 7.96 | 69.46 ± 7.91 | 69.17 ± 7.82 | 69.38 ± 7.7 |
| | FPFSCC | 69.22 ± 6.77 | 69.38 ± 6.92 | 68.95 ± 6.84 | 68.82 ± 6.96 | 69.22 ± 6.77 |
| | FPFSNHC | 71.06 ± 5.46 | 72.63 ± 5.63 | 71.76 ± 5.44 | 70.87 ± 5.57 | 71.06 ± 5.46 |
| | FPFS-EC | 86.57 ± 4.79 | 87.37 ± 4.69 | 86.22 ± 4.88 | 86.34 ± 4.9 | 86.57 ± 4.79 |
| | FPFS-AC | 84.99 ± 5.18 | 86.2 ± 4.96 | 84.47 ± 5.38 | 84.62 ± 5.41 | 84.99 ± 5.18 |
| | FPFS-CMC | 85.53 ± 4.78 | 86.33 ± 4.73 | 85.22 ± 4.92 | 85.29 ± 4.92 | 85.53 ± 4.78 |
| | FPFS-3NN(P) | 86.77 ± 4.62 | 88.1 ± 4.35 | 86.21 ± 4.83 | 86.42 ± 4.87 | 86.77 ± 4.62 |
| | FPFS-3NN(S) | 86.19 ± 4.77 | 87.82 ± 4.48 | 85.56 ± 4.97 | 85.79 ± 5.04 | 86.19 ± 4.77 |
| | FPFS-3NN(K) | 86.19 ± 4.77 | 87.82 ± 4.48 | 85.56 ± 4.97 | 85.79 ± 5.04 | 86.19 ± 4.77 |
| | IFPIFSC | 86.88 ± 5.15 | 87.83 ± 5.35 | 86.47 ± 5.25 | 86.65 ± 5.26 | 86.88 ± 5.15 |
| Seeds | Fuzzy 3NN | 90.32 ± 3.44 | 87.35 ± 4.44 | 85.48 ± 5.16 | 85.36 ± 5.4 | 85.48 ± 5.16 |
| | FSSC | 94.1 ± 2.08 | 91.54 ± 2.96 | 91.14 ± 3.12 | 91.08 ± 3.18 | 91.14 ± 3.12 |
| | FussCyier | 94.13 ± 2.23 | 91.63 ± 3.14 | 91.19 ± 3.34 | 91.15 ± 3.37 | 91.19 ± 3.34 |
| | HDFSSC | 93.17 ± 2.13 | 90.34 ± 3.11 | 89.76 ± 3.2 | 89.76 ± 3.19 | 89.76 ± 3.2 |
| | FPFSCC | 90.48 ± 3.32 | 86.35 ± 4.91 | 85.71 ± 4.98 | 85.68 ± 5.02 | 85.71 ± 4.98 |
| | FPFSNHC | 93.52 ± 2.46 | 90.92 ± 3.43 | 90.29 ± 3.69 | 90.28 ± 3.71 | 90.29 ± 3.69 |
| | FPFS-EC | 93.14 ± 2.59 | 90.18 ± 3.98 | 89.71 ± 3.89 | 89.58 ± 4 | 89.71 ± 3.89 |
| | FPFS-AC | 93.49 ± 2.59 | 90.71 ± 3.9 | 90.24 ± 3.89 | 90.11 ± 3.95 | 90.24 ± 3.89 |
FPFS-CMC93.05 ± 2.7490.02 ± 4.0389.57 ± 4.1189.45 ± 4.1989.57 ± 4.11
FPFS-3NN(P)92.86 ± 2.3889.82 ± 3.589.29 ± 3.5889.23 ± 3.6189.29 ± 3.58
FPFS-3NN(S)93.02 ± 2.6690.06 ± 3.9489.52 ± 489.46 ± 4.0389.52 ± 4
FPFS-3NN(K)92.79 ± 2.5189.77 ± 3.7389.19 ± 3.7689.14 ± 3.7889.19 ± 3.76
IFPIFSC95.49 ± 2.1193.59 ± 3.0793.24 ± 3.1793.19 ± 3.2593.24 ± 3.17
Parkinson
Acoustic
Fuzzy 3NN75.96 ± 5.9476.71 ± 5.9875.96 ± 5.9475.78 ± 6.0175.96 ± 5.94
FSSC79.75 ± 5.6980.34 ± 5.5679.75 ± 5.6979.63 ± 5.7779.75 ± 5.69
FussCyier80 ± 5.7980.5 ± 5.7180 ± 5.7979.9 ± 5.8580 ± 5.79
HDFSSC82.58 ± 4.7983.03 ± 4.6582.58 ± 4.7982.51 ± 4.8582.58 ± 4.79
FPFSCC79.96 ± 5.0880.73 ± 5.1679.96 ± 5.0879.83 ± 5.1279.96 ± 5.08
FPFSNHC79.08 ± 5.5779.63 ± 5.5179.08 ± 5.5778.97 ± 5.6279.08 ± 5.57
FPFS-EC75.71 ± 7.0576.05 ± 7.0975.71 ± 7.0575.62 ± 7.0775.71 ± 7.05
FPFS-AC80.67 ± 5.6381.23 ± 5.6680.67 ± 5.6380.58 ± 5.6680.67 ± 5.63
FPFS-CMC75.79 ± 6.7576.14 ± 6.8975.79 ± 6.7575.72 ± 6.7675.79 ± 6.75
FPFS-3NN(P)80.38 ± 5.3380.98 ± 5.2880.38 ± 5.3380.26 ± 5.480.38 ± 5.33
FPFS-3NN(S)79.79 ± 5.680.41 ± 5.5179.79 ± 5.679.67 ± 5.6979.79 ± 5.6
FPFS-3NN(K)80.46 ± 5.5381.12 ± 5.4780.46 ± 5.5380.34 ± 5.6180.46 ± 5.53
IFPIFSC82.54 ± 5.4482.97 ± 5.3982.54 ± 5.4482.48 ± 5.4882.54 ± 5.44
EcoliFuzzy 3NN92.08 ± 1.2253.87 ± 3.9460.13 ± 6.2464.95 ± 5.8568.34 ± 4.89
FSSC94.73 ± 1.3170.9 ± 7.7474.61 ± 4.4681.39 ± 5.0580.69 ± 4.41
FussCyier95.23 ± 1.1973.87 ± 7.475.16 ± 4.7382.21 ± 5.0382.59 ± 4.08
HDFSSC94.99 ± 1.169.08 ± 674.43 ± 4.6381.44 ± 4.481.41 ± 3.85
FPFSCC88.74 ± 1.7847.56 ± 8.8451.08 ± 8.3156.28 ± 6.857.89 ± 5.7
FPFSNHC93.64 ± 1.3964 ± 7.6566.75 ± 7.7674.49 ± 6.3176.13 ± 4.98
FPFS-EC94.08 ± 1.2868.97 ± 11.1765.21 ± 8.0274.07 ± 6.978.66 ± 4.75
FPFS-AC94.1 ± 1.1272.12 ± 8.367.66 ± 6.7174.88 ± 4.7179.04 ± 4.06
FPFS-CMC93.94 ± 1.1467.75 ± 9.764.38 ± 6.8972.69 ± 5.2478.18 ± 4.14
FPFS-3NN(P)94.49 ± 1.0374.72 ± 8.6565.59 ± 6.4174.75 ± 5.5781.31 ± 3.45
FPFS-3NN(S)95.18 ± 1.0178.06 ± 7.570.1 ± 6.7578.82 ± 5.383.66 ± 3.43
FPFS-3NN(K)95.26 ± 177.83 ± 7.4370.88 ± 6.8778.46 ± 5.6883.93 ± 3.34
IFPIFSC94.8 ± 1.0677.54 ± 7.771.43 ± 5.6779.18 ± 4.7681.73 ± 3.65
LeafFuzzy 3NN96.14 ± 0.2331.16 ± 4.6931.18 ± 3.9561.27 ± 4.0331.94 ± 4.05
FSSC97.43 ± 0.3466.6 ± 5.9261.82 ± 5.2270.9 ± 3.8561.5 ± 5.13
FussCyier97.46 ± 0.3566.76 ± 5.8262.26 ± 5.2371.58 ± 3.6661.97 ± 5.21
HDFSSC97.6 ± 0.3268.65 ± 5.4964.47 ± 5.0172.52 ± 3.5163.97 ± 4.77
FPFSCC96.95 ± 0.3259.05 ± 5.8954.58 ± 5.0767.86 ± 4.4254.26 ± 4.75
FPFSNHC97.46 ± 0.366.45 ± 5.1262.43 ± 4.672.58 ± 3.2761.97 ± 4.52
FPFS-EC97.8 ± 0.371.26 ± 5.9767.11 ± 5.0474.37 ± 3.367.06 ± 4.54
FPFS-AC97.85 ± 0.2872.46 ± 4.2667.86 ± 4.5674.59 ± 3.4367.74 ± 4.27
FPFS-CMC97.74 ± 0.2870.79 ± 4.4966.38 ± 4.5973.41 ± 3.5966.15 ± 4.21
FPFS-3NN(P)97.78 ± 0.2871.74 ± 4.5266.47 ± 3.974.31 ± 4.1166.65 ± 4.13
FPFS-3NN(S)97.94 ± 0.374.14 ± 4.7268.83 ± 4.4675.74 ± 4.1569.12 ± 4.56
FPFS-3NN(K)97.92 ± 0.3174.32 ± 4.8368.6 ± 4.4375.16 ± 4.0468.82 ± 4.62
IFPIFSC98.15 ± 0.2676.88 ± 4.0972.17 ± 3.9576.88 ± 3.1172.24 ± 3.87
IonosphereFuzzy 3NN84.99 ± 3.6189.17 ± 3.1179.57 ± 4.8681.66 ± 4.9884.99 ± 3.61
FSSC64.1 ± 0.3764.1 ± 0.3750 ± 078.13 ± 0.2764.1 ± 0.37
FussCyier64.1 ± 0.3764.1 ± 0.3750 ± 078.13 ± 0.2764.1 ± 0.37
HDFSSC64.1 ± 0.3764.1 ± 0.3750 ± 078.13 ± 0.2764.1 ± 0.37
FPFSCC84.88 ± 6.1784.51 ± 6.7283.52 ± 5.7983.58 ± 6.3684.88 ± 6.17
FPFSNHC82.6 ± 4.1783.27 ± 5.0878.43 ± 4.9479.76 ± 5.182.6 ± 4.17
FPFS-EC89.55 ± 3.6591.98 ± 2.9185.94 ± 4.9787.73 ± 4.7189.55 ± 3.65
FPFS-AC88.81 ± 3.591.82 ± 2.6384.79 ± 4.7786.76 ± 4.5288.81 ± 3.5
FPFS-CMC89.12 ± 2.9191.59 ± 2.4885.44 ± 3.9887.28 ± 3.6989.12 ± 2.91
FPFS-3NN(P)87.81 ± 2.8491.11 ± 2.483.42 ± 3.8385.51 ± 3.6687.81 ± 2.84
FPFS-3NN(S)87.78 ± 3.1190.9 ± 3.0283.47 ± 4.0185.53 ± 3.987.78 ± 3.11
FPFS-3NN(K)87.87 ± 3.0991.03 ± 2.8883.55 ± 4.0485.62 ± 3.9187.87 ± 3.09
IFPIFSC91.14 ± 2.9191.26 ± 3.4389.54 ± 3.2590.19 ± 3.2291.14 ± 2.91
Libras
Movement
Fuzzy 3NN95.9 ± 0.5573.7 ± 3.8369.23 ± 4.0669.07 ± 4.0769.22 ± 4.13
FSSC93.13 ± 0.7554.48 ± 5.5948.39 ± 5.6852.25 ± 5.5248.44 ± 5.62
FussCyier93.39 ± 0.7255.52 ± 5.7450.39 ± 5.5853.84 ± 4.9350.42 ± 5.43
HDFSSC93.94 ± 0.7259.18 ± 5.9854.49 ± 5.5158.01 ± 4.7454.58 ± 5.41
FPFSCC93.17 ± 0.7553.71 ± 5.9648.71 ± 5.752.09 ± 5.1548.81 ± 5.66
FPFSNHC93.15 ± 0.853.32 ± 6.0548.64 ± 653 ± 5.4948.64 ± 5.99
FPFS-EC97.01 ± 0.5680.44 ± 4.6277.59 ± 4.1877.63 ± 4.1777.56 ± 4.2
FPFS-AC97.33 ± 0.5282.59 ± 3.8380.09 ± 3.7879.78 ± 3.6179.94 ± 3.87
FPFS-CMC96.95 ± 0.5979.7 ± 4.5177.27 ± 4.2677.64 ± 4.3577.14 ± 4.4
FPFS-3NN(P)96.85 ± 0.5980.47 ± 4.1376.42 ± 4.476.22 ± 4.2176.39 ± 4.44
FPFS-3NN(S)96.74 ± 0.679.61 ± 3.7975.55 ± 4.4375.26 ± 4.2475.56 ± 4.5
FPFS-3NN(K)96.75 ± 0.6279.67 ± 3.9675.62 ± 4.5775.31 ± 4.3775.61 ± 4.65
IFPIFSC97.89 ± 0.4686.55 ± 3.1684.21 ± 3.5383.65 ± 3.5984.17 ± 3.43
DermatologyFuzzy 3NN91.22 ± 1.277.95 ± 3.6671.9 ± 4.7172.01 ± 4.4573.66 ± 3.6
FSSC99.15 ± 0.5597.36 ± 1.7597.14 ± 1.8897.13 ± 1.8697.46 ± 1.65
FussCyier98.62 ± 0.8195.82 ± 2.3296.27 ± 2.1195.78 ± 2.4195.85 ± 2.44
HDFSSC98.87 ± 0.7296.51 ± 2.296.5 ± 2.1696.31 ± 2.2896.61 ± 2.16
FPFSCC93.85 ± 1.3383.13 ± 3.8682.69 ± 3.7381.68 ± 3.8881.56 ± 3.99
FPFSNHC97.75 ± 0.9693.65 ± 2.5293.77 ± 2.7293.08 ± 2.9593.25 ± 2.88
FPFS-EC98.03 ± 0.7794.21 ± 2.1993.98 ± 2.493.69 ± 2.4194.1 ± 2.31
FPFS-AC98.83 ± 0.7896.53 ± 2.3396.23 ± 2.596.23 ± 2.596.5 ± 2.33
FPFS-CMC97.66 ± 0.8192.75 ± 2.692.65 ± 2.7492.42 ± 2.6692.98 ± 2.43
FPFS-3NN(P)97.4 ± 0.8892.31 ± 2.5791.98 ± 2.7691.76 ± 2.892.21 ± 2.65
FPFS-3NN(S)98.31 ± 0.7294.78 ± 2.394.65 ± 2.3794.46 ± 2.3894.94 ± 2.17
FPFS-3NN(K)98.24 ± 0.7694.66 ± 2.394.5 ± 2.3894.28 ± 2.4494.72 ± 2.27
IFPIFSC99.01 ± 0.7296.93 ± 2.3796.72 ± 2.396.67 ± 2.4197.02 ± 2.15
Breast
Cancer
Wisconsin
Fuzzy 3NN92.02 ± 2.191.97 ± 2.2490.96 ± 2.3991.36 ± 2.2992.02 ± 2.1
FSSC93.64 ± 2.3393.4 ± 2.4993.03 ± 2.6493.16 ± 2.5293.64 ± 2.33
FussCyier93.53 ± 2.394.3 ± 2.1891.98 ± 2.8892.88 ± 2.5893.53 ± 2.3
HDFSSC92.85 ± 2.2793 ± 2.2991.69 ± 2.7992.22 ± 2.5292.85 ± 2.27
FPFSCC93.34 ± 1.993.09 ± 2.0992.73 ± 2.1292.85 ± 2.0493.34 ± 1.9
FPFSNHC93.81 ± 2.2594.69 ± 2.1192.22 ± 2.7993.19 ± 2.5293.81 ± 2.25
FPFS-EC95.27 ± 1.6595.09 ± 1.9494.88 ± 1.6894.94 ± 1.7595.27 ± 1.65
FPFS-AC95.08 ± 1.5894.85 ± 1.7994.76 ± 1.7494.74 ± 1.6895.08 ± 1.58
FPFS-CMC95.03 ± 1.7494.84 ± 1.994.62 ± 1.9494.67 ± 1.8695.03 ± 1.74
FPFS-3NN(P)96.63 ± 1.4396.75 ± 1.696.07 ± 1.5996.37 ± 1.5496.63 ± 1.43
FPFS-3NN(S)96.54 ± 1.5296.68 ± 1.6995.96 ± 1.6896.27 ± 1.6396.54 ± 1.52
FPFS-3NN(K)96.54 ± 1.5296.68 ± 1.6995.96 ± 1.6896.27 ± 1.6396.54 ± 1.52
IFPIFSC95.69 ± 1.4395.57 ± 1.5995.28 ± 1.695.38 ± 1.5495.69 ± 1.43
HCV
Data
Fuzzy 3NN97.17 ± 0.5354.58 ± 11.2448.12 ± 12.3667.13 ± 10.3392.94 ± 1.31
FSSC97.29 ± 0.6264.38 ± 8.6863.6 ± 11.4769.32 ± 7.9193.23 ± 1.55
FussCyier97.32 ± 0.6165.17 ± 9.4762.55 ± 11.369.64 ± 8.8493.31 ± 1.52
HDFSSC96.73 ± 0.9662.71 ± 8.6764.74 ± 11.1467.65 ± 6.8791.82 ± 2.41
FPFSCC95.95 ± 0.9951.7 ± 13.0350.43 ± 11.2465.59 ± 10.1589.88 ± 2.48
FPFSNHC97.15 ± 0.6463.69 ± 12.4454.98 ± 1168.58 ± 6.6892.87 ± 1.61
FPFS-EC97.11 ± 0.5760.45 ± 14.6447.08 ± 1082.26 ± 10.9892.78 ± 1.42
FPFS-AC97.97 ± 0.5873.93 ± 14.5155.96 ± 10.776.71 ± 10.0294.92 ± 1.45
FPFS-CMC97.04 ± 0.5563.74 ± 13.6948.65 ± 10.2276.46 ± 10.792.6 ± 1.38
FPFS-3NN(P)97 ± 0.3356.97 ± 9.9538.03 ± 5.6584.43 ± 9.492.51 ± 0.84
FPFS-3NN(S)97.3 ± 0.4167.66 ± 12.1243.88 ± 7.7680.49 ± 9.4893.26 ± 1.04
FPFS-3NN(K)97.3 ± 0.4167.22 ± 11.8843.88 ± 7.7680.4 ± 9.5493.26 ± 1.04
IFPIFSC97.92 ± 0.5270.56 ± 10.857.48 ± 12.0374.69 ± 7.5594.81 ± 1.29
Parkinson’s
Disease
Classification
Fuzzy 3NN71.27 ± 3.1961.36 ± 4.2860.41 ± 3.7660.68 ± 3.9371.27 ± 3.19
FSSC38.3 ± 747.68 ± 4.7848 ± 4.8737.76 ± 6.6338.3 ± 7
FussCyier62.3 ± 16.0847.44 ± 6.0349.01 ± 2.0944.4 ± 11.9562.3 ± 16.08
HDFSSC62.52 ± 15.9647.31 ± 6.7349.01 ± 2.2245.17 ± 13.0262.52 ± 15.96
FPFSCC74.56 ± 3.969.04 ± 4.172.65 ± 4.769.79 ± 4.3574.56 ± 3.9
FPFSNHC73.79 ± 2.8467.85 ± 3.1970.99 ± 4.0968.52 ± 3.3673.79 ± 2.84
FPFS-EC94.1 ± 2.3792.32 ± 3.2892.24 ± 3.2892.22 ± 3.1294.1 ± 2.37
FPFS-AC93.63 ± 1.8891.87 ± 2.7691.38 ± 2.6691.55 ± 2.4693.63 ± 1.88
FPFS-CMC90.9 ± 2.3288.37 ± 3.4487.72 ± 2.8987.94 ± 2.9490.9 ± 2.32
FPFS-3NN(P)92.39 ± 1.9391.11 ± 2.5988.57 ± 3.489.62 ± 2.7992.39 ± 1.93
FPFS-3NN(S)91.67 ± 1.8889.89 ± 2.5487.84 ± 3.388.69 ± 2.7391.67 ± 1.88
FPFS-3NN(K)91.67 ± 1.8589.96 ± 2.4487.74 ± 3.3688.66 ± 2.7391.67 ± 1.85
IFPIFSC94.95 ± 1.5693.76 ± 2.1892.96 ± 2.5693.27 ± 2.1294.95 ± 1.56
Mice
Protein
Expression
Fuzzy 3NN99.89 ± 0.1299.58 ± 0.4399.56 ± 0.4799.56 ± 0.4699.55 ± 0.47
FSSC98.67 ± 0.4895.01 ± 1.894.9 ± 1.8694.83 ± 1.8894.69 ± 1.92
FussCyier98.75 ± 0.4895.33 ± 1.7895.22 ± 1.8595.14 ± 1.8894.99 ± 1.9
HDFSSC100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
FPFSCC99.98 ± 0.0599.91 ± 0.1899.91 ± 0.1999.91 ± 0.1999.91 ± 0.19
FPFSNHC99.98 ± 0.0599.93 ± 0.1699.93 ± 0.1699.92 ± 0.1699.92 ± 0.18
FPFS-EC100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
FPFS-AC100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
FPFS-CMC100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
FPFS-3NN(P)100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
FPFS-3NN(S)100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
FPFS-3NN(K)100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
IFPIFSC100 ± 0100 ± 0100 ± 0100 ± 0100 ± 0
Semeion
Handwritten
Digit
Fuzzy 3NN97.23 ± 0.7197.67 ± 1.4786.67 ± 3.5691.16 ± 2.6697.23 ± 0.71
FSSC44.16 ± 2.9657.54 ± 0.3668.98 ± 1.6640.62 ± 2.2344.16 ± 2.96
FussCyier76.2 ± 2.6564.06 ± 1.3384.36 ± 2.1964.53 ± 2.4176.2 ± 2.65
HDFSSC89.45 ± 1.7573.53 ± 2.5288.22 ± 2.9178 ± 2.789.45 ± 1.75
FPFSCC66.56 ± 7.6260.04 ± 2.8375.92 ± 4.9856.13 ± 6.0366.56 ± 7.62
FPFSNHC80.18 ± 2.3465.7 ± 1.6685.25 ± 2.867.85 ± 2.5180.18 ± 2.34
FPFS-EC96.65 ± 0.992.33 ± 2.9888.4 ± 3.7890.11 ± 2.8696.65 ± 0.9
FPFS-AC95.2 ± 1.3788.24 ± 4.4883.75 ± 4.7285.68 ± 4.2495.2 ± 1.37
FPFS-CMC94.46 ± 1.1585.4 ± 3.7882.85 ± 3.9283.93 ± 3.4294.46 ± 1.15
FPFS-3NN(P)96.62 ± 0.7794.26 ± 2.4586.1 ± 3.5289.53 ± 2.6296.62 ± 0.77
FPFS-3NN(S)96.62 ± 0.7794.26 ± 2.4586.1 ± 3.5289.53 ± 2.6296.62 ± 0.77
FPFS-3NN(K)96.62 ± 0.7794.26 ± 2.4586.1 ± 3.5289.53 ± 2.6296.62 ± 0.77
IFPIFSC98.14 ± 0.7597.32 ± 1.8592.16 ± 3.4794.42 ± 2.4398.14 ± 0.75
Car
Evaluation
Fuzzy 3NN94.43 ± 0.7179.11 ± 2.8462.39 ± 4.6166.95 ± 4.8988.86 ± 1.41
FSSC72.2 ± 1.0438.09 ± 1.4857.24 ± 3.4334.39 ± 1.8344.39 ± 2.08
FussCyier80.38 ± 1.0844.49 ± 1.8365.07 ± 3.5245.43 ± 2.2260.76 ± 2.16
HDFSSC86.66 ± 1.0555.65 ± 2.4576.71 ± 4.1560.53 ± 3.0573.32 ± 2.09
FPFSCC84.99 ± 1.4358.65 ± 4.5675.17 ± 4.8262.41 ± 5.0169.98 ± 2.87
FPFSNHC79.61 ± 1.0642.66 ± 2.2163.24 ± 3.3543.42 ± 2.7859.21 ± 2.11
FPFS-EC97.46 ± 0.5490.01 ± 2.5689.04 ± 3.8289.25 ± 2.9694.91 ± 1.07
FPFS-AC97.79 ± 0.5690.51 ± 3.2392.62 ± 2.6491.24 ± 2.8695.57 ± 1.13
FPFS-CMC97.42 ± 0.6289.93 ± 3.0188.59 ± 3.5788.88 ± 2.9594.85 ± 1.24
FPFS-3NN(P)97.7 ± 0.6989.07 ± 3.890.77 ± 3.7689.62 ± 3.795.41 ± 1.38
FPFS-3NN(S)97.77 ± 0.6489.4 ± 3.5591.12 ± 3.4789.99 ± 3.3995.54 ± 1.28
FPFS-3NN(K)97.75 ± 0.6589.39 ± 3.6191.03 ± 3.4589.93 ± 3.4295.49 ± 1.29
IFPIFSC98.03 ± 0.4291.27 ± 3.0390.41 ± 3.1990.59 ± 2.6496.06 ± 0.85
Wireless
Indoor
Localization
Fuzzy 3NN99.13 ± 0.2898.29 ± 0.5598.26 ± 0.5698.26 ± 0.5698.26 ± 0.56
FSSC97.5 ± 0.4295.42 ± 0.7195 ± 0.8394.99 ± 0.8495 ± 0.83
FussCyier97.62 ± 0.495.64 ± 0.6895.24 ± 0.895.24 ± 0.895.24 ± 0.8
HDFSSC96.73 ± 0.5793.9 ± 1.0493.46 ± 1.1593.46 ± 1.1593.46 ± 1.15
FPFSCC91.39 ± 0.8883.12 ± 1.7382.79 ± 1.7682.61 ± 1.7882.79 ± 1.76
FPFSNHC94.64 ± 0.7589.79 ± 1.3689.27 ± 1.589.33 ± 1.4889.27 ± 1.5
FPFS-EC94.86 ± 0.7989.83 ± 1.5789.73 ± 1.5889.73 ± 1.5889.73 ± 1.58
FPFS-AC95.63 ± 0.5991.4 ± 1.1891.26 ± 1.1991.26 ± 1.1991.26 ± 1.19
FPFS-CMC94.54 ± 0.6989.22 ± 1.3789.09 ± 1.3989.1 ± 1.3889.09 ± 1.39
FPFS-3NN(P)95.27 ± 0.7390.71 ± 1.4390.54 ± 1.4690.57 ± 1.4590.54 ± 1.46
FPFS-3NN(S)95.05 ± 0.7390.28 ± 1.4190.11 ± 1.4690.14 ± 1.4490.11 ± 1.46
FPFS-3NN(K)96.32 ± 0.6792.8 ± 1.2992.64 ± 1.3392.67 ± 1.3292.64 ± 1.33
IFPIFSC99.15 ± 0.2498.32 ± 0.4798.3 ± 0.4898.3 ± 0.4898.3 ± 0.48
Mean
Performance
Results
Fuzzy 3NN89.1 ± 2.4275.91 ± 4.9272.31 ± 5.5176.27 ± 5.0678.7 ± 3.99
FSSC82.93 ± 2.5373.21 ± 4.973.38 ± 4.6672.95 ± 4.2573.58 ± 4.08
FussCyier86.01 ± 2.973.98 ± 4.8674.51 ± 4.574.96 ± 4.4977.12 ± 4.48
HDFSSC87.43 ± 2.9175.51 ± 4.6976.26 ± 4.6977.25 ± 4.5579.42 ± 4.44
FPFSCC86.25 ± 3.0872.2 ± 5.9173.07 ± 5.9473.55 ± 5.5875.43 ± 4.9
FPFSNHC87.2 ± 2.4975.07 ± 5.2875.64 ± 5.2175.39 ± 4.477.92 ± 4.13
FPFS-EC93.17 ± 2.184.82 ± 5.0482.56 ± 4.9185.91 ± 4.4386.92 ± 3.5
FPFS-AC93.2 ± 2.185.81 ± 4.8783.25 ± 4.8585.63 ± 4.3587.36 ± 3.48
FPFS-CMC92.68 ± 2.184.03 ± 4.9781.73 ± 4.984.63 ± 4.486.24 ± 3.55
FPFS-3NN(P)93.04 ± 1.9584.75 ± 4.4781.39 ± 4.4685.38 ± 4.2886.67 ± 3.31
FPFS-3NN(S)92.99 ± 1.9585.45 ± 4.4181.9 ± 4.5385.38 ± 4.1986.84 ± 3.26
FPFS-3NN(K)93.03 ± 1.9685.51 ± 4.4381.98 ± 4.5885.38 ± 4.2186.89 ± 3.3
IFPIFSC94.45 ± 1.8388.21 ± 4.2186.11 ± 4.3187.98 ± 3.6889.62 ± 3.03
Acc, Pre, Rec, MacF, and MicF results and their standard deviations (SD) are presented as percentages. The best performance results are shown in bold.
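For reference, the five reported metrics can be computed from predicted and true labels as sketched below. This is a minimal illustration using one common convention (macro scores as unweighted means of per-class values, micro F from pooled counts); the paper's exact definitions may differ in detail, so published values should be reproduced from its formulas.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Return Acc, macro Pre, macro Rec, macro F (MacF), and micro F (MicF)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    tp = np.array([np.sum((y_pred == c) & (y_true == c)) for c in labels])
    fp = np.array([np.sum((y_pred == c) & (y_true != c)) for c in labels])
    fn = np.array([np.sum((y_pred != c) & (y_true == c)) for c in labels])
    zeros = np.zeros(len(labels))
    # Per-class precision and recall; empty denominators are treated as 0.
    pre_c = np.divide(tp, tp + fp, out=zeros.copy(), where=(tp + fp) > 0)
    rec_c = np.divide(tp, tp + fn, out=zeros.copy(), where=(tp + fn) > 0)
    f_c = np.divide(2 * pre_c * rec_c, pre_c + rec_c,
                    out=zeros.copy(), where=(pre_c + rec_c) > 0)
    acc = np.mean(y_true == y_pred)
    mic_f = tp.sum() / (tp.sum() + 0.5 * (fp.sum() + fn.sum()))
    return acc, pre_c.mean(), rec_c.mean(), f_c.mean(), mic_f
```

The table entries are then the means and standard deviations of these per-run values across the cross-validation repetitions.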
Table 3. Ranking numbers of the best results for all fuzzy-based classifiers.

| Classifiers | Acc | Pre | Rec | MacF | MicF | Total Rank |
|---|---|---|---|---|---|---|
| Fuzzy 3NN | 0/20 | 1/20 | 0/20 | 0/20 | 0/20 | 1/100 |
| FSSC | 1/20 | 1/20 | 1/20 | 1/20 | 1/20 | 5/100 |
| FussCyier | 0/20 | 0/20 | 1/20 | 1/20 | 0/20 | 2/100 |
| HDFSSC | 2/20 | 2/20 | 3/20 | 2/20 | 2/20 | 11/100 |
| FPFSCC | 0/20 | 0/20 | 0/20 | 0/20 | 0/20 | 0/100 |
| FPFSNHC | 0/20 | 0/20 | 0/20 | 0/20 | 0/20 | 0/100 |
| FPFS-EC | 3/20 | 4/20 | 2/20 | 3/20 | 3/20 | 15/100 |
| FPFS-AC | 3/20 | 3/20 | 3/20 | 3/20 | 3/20 | 15/100 |
| FPFS-CMC | 1/20 | 1/20 | 1/20 | 1/20 | 1/20 | 5/100 |
| FPFS-3NN(P) | 2/20 | 3/20 | 2/20 | 3/20 | 2/20 | 12/100 |
| FPFS-3NN(S) | 1/20 | 2/20 | 1/20 | 1/20 | 1/20 | 6/100 |
| FPFS-3NN(K) | 2/20 | 1/20 | 1/20 | 1/20 | 2/20 | 7/100 |
| IFPIFSC | 12/20 | 9/20 | 12/20 | 11/20 | 12/20 | 56/100 |
Table 4. Ranking numbers of the best results of IFPIFSC over the others.

| Classifiers | Acc | Pre | Rec | MacF | MicF |
|---|---|---|---|---|---|
| IFPIFSC versus Fuzzy 3NN | 20 | 19 | 20 | 20 | 20 |
| IFPIFSC versus FSSC | 19 | 19 | 17 | 18 | 19 |
| IFPIFSC versus FussCyier | 19 | 20 | 18 | 19 | 20 |
| IFPIFSC versus HDFSSC | 18 | 19 | 17 | 18 | 19 |
| IFPIFSC versus FPFSCC | 20 | 20 | 20 | 20 | 20 |
| IFPIFSC versus FPFSNHC | 20 | 20 | 20 | 20 | 20 |
| IFPIFSC versus FPFS-EC | 18 | 16 | 19 | 16 | 17 |
| IFPIFSC versus FPFS-AC | 18 | 17 | 18 | 17 | 18 |
| IFPIFSC versus FPFS-CMC | 19 | 17 | 19 | 19 | 19 |
| IFPIFSC versus FPFS-3NN(P) | 19 | 17 | 18 | 18 | 19 |
| IFPIFSC versus FPFS-3NN(S) | 18 | 18 | 18 | 18 | 18 |
| IFPIFSC versus FPFS-3NN(K) | 18 | 18 | 18 | 18 | 18 |
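Tables 3 and 4 are tallies over the 20 datasets: Table 3 counts how often each classifier attains a column-best score, and Table 4 counts, per metric, on how many datasets IFPIFSC performs at least as well as a given rival. A small sketch with hypothetical per-dataset accuracies (treating ties as wins is an assumption here, not necessarily the paper's convention):

```python
import numpy as np

# Hypothetical per-dataset accuracy values (one entry per dataset).
scores = {
    "IFPIFSC":   np.array([98.2, 95.2, 86.9, 95.5]),
    "Fuzzy 3NN": np.array([82.2, 85.4, 82.5, 90.3]),
    "FPFS-EC":   np.array([97.6, 95.9, 86.6, 93.1]),
}

def wins(ours, theirs):
    """Number of datasets on which `ours` scores at least as well as `theirs`."""
    return int(np.sum(ours >= theirs))

for rival in ("Fuzzy 3NN", "FPFS-EC"):
    print(f"IFPIFSC versus {rival}: {wins(scores['IFPIFSC'], scores[rival])}")
```

Repeating the count for each of the five metrics yields one row of a Table 4-style comparison.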
Table 5. Simulation results of the non-fuzzy-based classifiers.

| Datasets | Classifiers | Acc ± SD | Pre ± SD | Rec ± SD | MacF ± SD | MicF ± SD |
|---|---|---|---|---|---|---|
| Zoo | SVM | 98.51 ± 1.14 | 92.64 ± 6.1 | 89.79 ± 7.5 | 94.88 ± 4.55 | 94.84 ± 3.99 |
| | DT | 96.97 ± 1.19 | 83.19 ± 8.82 | 76.02 ± 9.61 | 87.71 ± 4.33 | 89.6 ± 4.18 |
| | BT | 82.45 ± 1.28 | 40.57 ± 1.15 | 14.76 ± 0.96 | 57.71 ± 1.15 | 40.57 ± 1.15 |
| | RF | 98.9 ± 1.02 | 94.96 ± 6.42 | 90.36 ± 9.41 | 96.58 ± 4 | 96.24 ± 3.54 |
| | AdaBoost | 82.45 ± 1.28 | 40.57 ± 1.15 | 14.76 ± 0.96 | 57.71 ± 1.15 | 40.57 ± 1.15 |
| | IFPIFSC | 98.67 ± 1.03 | 93.29 ± 6.43 | 89.38 ± 8.15 | 95.44 ± 4.57 | 95.44 ± 3.59 |
| Breast Tissue | SVM | 89.03 ± 3.56 | 69.42 ± 11.53 | 66.12 ± 11.07 | 68.75 ± 9.86 | 67.1 ± 10.68 |
| | DT | 88.8 ± 2.98 | 69.31 ± 9.82 | 65.28 ± 9.38 | 69.12 ± 8.28 | 66.41 ± 8.94 |
| | BT | 89.75 ± 2.92 | 71.85 ± 9.62 | 68.49 ± 8.9 | 70.29 ± 7.01 | 69.26 ± 8.75 |
| | RF | 89.81 ± 3.16 | 70.99 ± 11.25 | 68.13 ± 9.58 | 72.02 ± 7.23 | 69.42 ± 9.48 |
| | AdaBoost | 89.75 ± 2.92 | 71.85 ± 9.62 | 68.49 ± 8.9 | 70.29 ± 7.01 | 69.26 ± 8.75 |
| | IFPIFSC | 90.97 ± 2.22 | 74.85 ± 8.12 | 71.96 ± 7.36 | 73.15 ± 6.85 | 72.91 ± 6.67 |
| Teaching Assistant Evaluation | SVM | 68.03 ± 5.17 | 53.94 ± 8.34 | 52.2 ± 7.79 | 51.88 ± 7.59 | 52.05 ± 7.76 |
| | DT | 69.76 ± 6.08 | 55.26 ± 9.74 | 54.57 ± 9.12 | 53.91 ± 9.36 | 54.65 ± 9.12 |
| | BT | 70.5 ± 5.73 | 56.88 ± 9.61 | 55.73 ± 8.63 | 55.36 ± 8.75 | 55.75 ± 8.59 |
| | RF | 74.56 ± 4.68 | 62.81 ± 7.87 | 61.73 ± 7.08 | 61.16 ± 7.32 | 61.85 ± 7.01 |
| | AdaBoost | 70.5 ± 5.73 | 56.88 ± 9.61 | 55.73 ± 8.63 | 55.36 ± 8.75 | 55.75 ± 8.59 |
| | IFPIFSC | 74.62 ± 5 | 62.75 ± 8.12 | 61.79 ± 7.6 | 61.34 ± 7.85 | 61.94 ± 7.5 |
| Wine | SVM | 96.78 ± 2.27 | 95.45 ± 3.06 | 95.43 ± 3.12 | 95.19 ± 3.33 | 95.16 ± 3.4 |
| | DT | 93.69 ± 3.48 | 91.08 ± 5.21 | 90.91 ± 4.92 | 90.59 ± 5.27 | 90.54 ± 5.22 |
| | BT | 61.17 ± 6.34 | 41.79 ± 9.64 | 35.54 ± 10.93 | 58.21 ± 6.24 | 41.76 ± 9.51 |
| | RF | 98.68 ± 1.44 | 98.02 ± 2.18 | 98.29 ± 1.85 | 98.06 ± 2.11 | 98.03 ± 2.16 |
| | AdaBoost | 61.17 ± 6.34 | 41.79 ± 9.64 | 35.54 ± 10.93 | 58.21 ± 6.24 | 41.76 ± 9.51 |
| | IFPIFSC | 97.98 ± 2.05 | 97.37 ± 2.39 | 97.45 ± 2.62 | 97.2 ± 2.88 | 96.97 ± 3.08 |
| Parkinsons [sic] | SVM | 86.67 ± 3.06 | 87.29 ± 6.23 | 75.98 ± 5.42 | 79.04 ± 5.42 | 86.67 ± 3.06 |
| | DT | 86.67 ± 5.38 | 82.75 ± 6.98 | 83.58 ± 6.64 | 82.57 ± 6.51 | 86.67 ± 5.38 |
| | BT | 89.23 ± 5.56 | 86.83 ± 7.62 | 84.09 ± 8.17 | 84.87 ± 7.85 | 89.23 ± 5.56 |
| | RF | 90.67 ± 3.76 | 90.23 ± 5.54 | 84.64 ± 6.44 | 86.45 ± 5.57 | 90.67 ± 3.76 |
| | AdaBoost | 88.87 ± 6.85 | 87.64 ± 8.09 | 81.6 ± 13.94 | 87.43 ± 5.42 | 88.87 ± 6.85 |
| | IFPIFSC | 94.67 ± 3.97 | 92.72 ± 5.29 | 93.76 ± 5.59 | 92.95 ± 5.13 | 94.67 ± 3.97 |
| Sonar | SVM | 76.2 ± 6.51 | 77.01 ± 6.77 | 75.69 ± 6.54 | 75.68 ± 6.7 | 76.2 ± 6.51 |
| | DT | 71.87 ± 6.85 | 72.33 ± 6.94 | 71.67 ± 6.83 | 71.51 ± 6.92 | 71.87 ± 6.85 |
| | BT | 85.04 ± 5.91 | 85.65 ± 6 | 84.75 ± 5.94 | 84.84 ± 5.97 | 85.04 ± 5.91 |
| | RF | 83.7 ± 6 | 84.73 ± 5.82 | 83.34 ± 6.1 | 83.37 ± 6.2 | 83.7 ± 6 |
| | AdaBoost | 84.37 ± 5.16 | 85.05 ± 5.18 | 83.99 ± 5.22 | 84.12 ± 5.26 | 84.37 ± 5.16 |
| | IFPIFSC | 87.45 ± 5.13 | 88.26 ± 5.01 | 87.04 ± 5.23 | 87.21 ± 5.27 | 87.45 ± 5.13 |
| Seeds | SVM | 94.44 ± 2.41 | 92.12 ± 3.55 | 91.67 ± 3.61 | 91.58 ± 3.67 | 91.67 ± 3.61 |
| | DT | 94.29 ± 2.57 | 92.03 ± 3.69 | 91.43 ± 3.85 | 91.38 ± 3.91 | 91.43 ± 3.85 |
| | BT | 88.7 ± 14.81 | 83.41 ± 22.36 | 83.05 ± 22.21 | 85.64 ± 16.15 | 83.05 ± 22.21 |
| | RF | 95.27 ± 2.28 | 93.39 ± 3.25 | 92.9 ± 3.42 | 92.87 ± 3.44 | 92.9 ± 3.42 |
| | AdaBoost | 88.7 ± 14.81 | 83.41 ± 22.36 | 83.05 ± 22.21 | 85.64 ± 16.15 | 83.05 ± 22.21 |
| | IFPIFSC | 95.68 ± 2.44 | 94.02 ± 3.39 | 93.52 ± 3.66 | 93.48 ± 3.71 | 93.52 ± 3.66 |
| Parkinson Acoustic | SVM | 80.17 ± 6.12 | 80.85 ± 5.98 | 80.17 ± 6.12 | 80.03 ± 6.23 | 80.17 ± 6.12 |
| | DT | 72.54 ± 5.95 | 73.1 ± 6.17 | 72.54 ± 5.95 | 72.38 ± 5.98 | 72.54 ± 5.95 |
| | BT | 80.29 ± 5.46 | 81.03 ± 5.42 | 80.29 ± 5.46 | 80.16 ± 5.52 | 80.29 ± 5.46 |
| | RF | 80.46 ± 5.39 | 81.13 ± 5.48 | 80.46 ± 5.39 | 80.35 ± 5.43 | 80.46 ± 5.39 |
| | AdaBoost | 81.54 ± 5.76 | 82.21 ± 5.72 | 81.54 ± 5.76 | 81.43 ± 5.81 | 81.54 ± 5.76 |
| | IFPIFSC | 81.88 ± 4.67 | 82.32 ± 4.62 | 81.88 ± 4.67 | 81.81 ± 4.72 | 81.88 ± 4.67 |
| Ecoli | SVM | 93.91 ± 0.78 | 78.32 ± 9.3 | 51.95 ± 9.5 | 75.99 ± 6.39 | 79.29 ± 2.95 |
| | DT | 94.43 ± 1.16 | 71.31 ± 8.92 | 57.72 ± 7.98 | 75.75 ± 5.77 | 80.69 ± 4.34 |
| | BT | 95.28 ± 1.09 | 78.54 ± 9.7 | 67.64 ± 9.94 | 80.4 ± 5.47 | 83.49 ± 4.16 |
| | RF | 95.8 ± 0.97 | 84.53 ± 5.55 | 71.05 ± 9.06 | 83.45 ± 4.34 | 85.69 ± 3.53 |
| | AdaBoost | 95.28 ± 1.09 | 78.54 ± 9.7 | 67.64 ± 9.94 | 80.4 ± 5.47 | 83.49 ± 4.16 |
| | IFPIFSC | 94.85 ± 1.01 | 77.57 ± 7.86 | 71.34 ± 6.66 | 79.43 ± 4.82 | 81.82 ± 3.81 |
| Leaf | SVM | 96.96 ± 0.29 | 62.81 ± 5.15 | 53.46 ± 4.08 | 68.93 ± 4.32 | 54.47 ± 4.35 |
| | DT | 97.44 ± 0.36 | 66.56 ± 6.58 | 61.31 ± 5.54 | 70.92 ± 3.94 | 61.65 ± 5.4 |
| | BT | 97.84 ± 0.38 | 73.3 ± 6.15 | 67.39 ± 5.9 | 74.64 ± 4.25 | 67.62 ± 5.63 |
| | RF | 98.4 ± 0.35 | 80.12 ± 5.24 | 75.43 ± 5.25 | 80.56 ± 3.94 | 75.94 ± 5.25 |
| | AdaBoost | 97.84 ± 0.38 | 73.3 ± 6.15 | 67.39 ± 5.9 | 74.64 ± 4.25 | 67.62 ± 5.63 |
| | IFPIFSC | 98.11 ± 0.31 | 76.44 ± 4.84 | 71.4 ± 4.7 | 75.83 ± 3.95 | 71.59 ± 4.69 |
| Ionosphere | SVM | 87.18 ± 2.85 | 89.02 ± 2.87 | 83.44 ± 3.83 | 85.08 ± 3.61 | 87.18 ± 2.85 |
| | DT | 88.58 ± 3.32 | 87.84 ± 3.7 | 87.75 ± 3.69 | 87.61 ± 3.6 | 88.58 ± 3.32 |
| | BT | 93.93 ± 2.7 | 94.57 ± 2.64 | 92.32 ± 3.41 | 93.21 ± 3.1 | 93.93 ± 2.7 |
| | RF | 93.3 ± 2.7 | 93.55 ± 2.9 | 91.98 ± 3.24 | 92.58 ± 3.03 | 93.3 ± 2.7 |
| | AdaBoost | 93.25 ± 2.39 | 94.01 ± 2.43 | 91.43 ± 3.03 | 92.43 ± 2.75 | 93.25 ± 2.39 |
| | IFPIFSC | 91.43 ± 2.56 | 91.6 ± 2.69 | 89.87 ± 3.4 | 90.47 ± 2.95 | 91.43 ± 2.56 |
| Libras Movement | SVM | 95.86 ± 0.63 | 73.58 ± 4.58 | 68.99 ± 4.61 | 68.76 ± 4.83 | 68.97 ± 4.73 |
| | DT | 94.92 ± 0.88 | 65.79 ± 6.93 | 61.87 ± 6.68 | 63.13 ± 6.09 | 61.89 ± 6.62 |
| | BT | 96.09 ± 0.63 | 74.11 ± 4.32 | 70.63 ± 4.8 | 70.94 ± 5.08 | 70.64 ± 4.72 |
| | RF | 97.45 ± 0.57 | 83.09 ± 3.92 | 80.95 ± 4.21 | 80.78 ± 4.45 | 80.86 ± 4.29 |
| | AdaBoost | 96.09 ± 0.63 | 74.11 ± 4.32 | 70.63 ± 4.8 | 70.94 ± 5.08 | 70.64 ± 4.72 |
| | IFPIFSC | 97.93 ± 0.5 | 86.88 ± 3.56 | 84.55 ± 3.78 | 83.9 ± 4.08 | 84.5 ± 3.77 |
| Dermatology | SVM | 98.89 ± 0.55 | 96.57 ± 1.78 | 96.33 ± 1.86 | 96.25 ± 1.89 | 96.67 ± 1.66 |
| | DT | 98.12 ± 0.64 | 94.09 ± 2.55 | 93.37 ± 3.04 | 93.23 ± 2.67 | 94.35 ± 1.91 |
| | BT | 98.87 ± 0.58 | 96.08 ± 2.36 | 95.6 ± 2.94 | 95.53 ± 2.67 | 96.61 ± 1.75 |
| | RF | 99.25 ± 0.57 | 97.79 ± 1.8 | 97.45 ± 1.94 | 97.51 ± 1.9 | 97.76 ± 1.71 |
| | AdaBoost | 98.87 ± 0.58 | 96.08 ± 2.36 | 95.6 ± 2.94 | 95.53 ± 2.67 | 96.61 ± 1.75 |
| | IFPIFSC | 99.03 ± 0.58 | 96.83 ± 2.01 | 96.79 ± 1.84 | 96.67 ± 1.93 | 97.08 ± 1.75 |
| Breast Cancer Wisconsin | SVM | 95.29 ± 2.07 | 95.31 ± 2.36 | 94.67 ± 2.17 | 94.93 ± 2.21 | 95.29 ± 2.07 |
| | DT | 93.03 ± 2.37 | 92.56 ± 2.61 | 92.67 ± 2.5 | 92.56 ± 2.52 | 93.03 ± 2.37 |
| | BT | 96.64 ± 1.8 | 96.92 ± 1.8 | 95.96 ± 2.14 | 96.37 ± 1.95 | 96.64 ± 1.8 |
| | RF | 95.9 ± 1.76 | 95.88 ± 1.94 | 95.4 ± 1.94 | 95.6 ± 1.89 | 95.9 ± 1.76 |
| | AdaBoost | 96.92 ± 1.65 | 97.12 ± 1.7 | 96.34 ± 1.92 | 96.68 ± 1.79 | 96.92 ± 1.65 |
| | IFPIFSC | 95.57 ± 1.59 | 95.4 ± 1.82 | 95.18 ± 1.66 | 95.26 ± 1.69 | 95.57 ± 1.59 |
| HCV Data | SVM | 97.89 ± 0.7 | 70.03 ± 13.49 | 62.44 ± 13.79 | 72.6 ± 7.52 | 94.72 ± 1.75 |
| | DT | 97.18 ± 0.71 | 63.1 ± 11.34 | 53.11 ± 12.66 | 70.15 ± 8.79 | 92.95 ± 1.79 |
| | BT | 97.9 ± 0.52 | 70.93 ± 12.7 | 56.71 ± 11.99 | 75.35 ± 8.03 | 94.75 ± 1.29 |
| | RF | 97.76 ± 0.63 | 68.44 ± 14.52 | 54.28 ± 13.14 | 76.44 ± 9.19 | 94.41 ± 1.57 |
| | AdaBoost | 97.9 ± 0.52 | 70.93 ± 12.7 | 56.71 ± 11.99 | 75.35 ± 8.03 | 94.75 ± 1.29 |
| | IFPIFSC | 97.92 ± 0.48 | 70.4 ± 10.95 | 57.58 ± 11.82 | 72.78 ± 7.15 | 94.8 ± 1.19 |
| Parkinson's Disease Classification | SVM | 74.6 ± 0.29 | 74.6 ± 0.29 | 50 ± 0 | 85.45 ± 0.19 | 74.6 ± 0.29 |
| | DT | 80.54 ± 3.35 | 74.46 ± 4.45 | 74.34 ± 4.9 | 74.25 ± 4.58 | 80.54 ± 3.35 |
| | BT | 91.28 ± 2.03 | 91.87 ± 2.78 | 84.75 ± 3.55 | 87.44 ± 3.09 | 91.28 ± 2.03 |
| | RF | 87.17 ± 2.22 | 87.78 ± 3.73 | 77.34 ± 3.92 | 80.53 ± 3.77 | 87.17 ± 2.22 |
| | AdaBoost | 90.29 ± 2.37 | 91 ± 3.08 | 82.89 ± 4.34 | 85.77 ± 3.84 | 90.29 ± 2.37 |
| | IFPIFSC | 94.83 ± 1.85 | 93.6 ± 2.27 | 92.69 ± 3.05 | 93.08 ± 2.56 | 94.83 ± 1.85 |
| Mice Protein Expression | SVM | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 |
| | DT | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 |
| | BT | 78.48 ± 0.01 | 13.93 ± 0.03 | 12.5 ± 0 | 24.45 ± 0.05 | 13.93 ± 0.03 |
| | RF | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 |
| | AdaBoost | 78.48 ± 0.01 | 13.93 ± 0.03 | 12.5 ± 0 | 24.45 ± 0.05 | 13.93 ± 0.03 |
| | IFPIFSC | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 |
| Semeion Handwritten Digit | SVM | 97.81 ± 0.78 | 95.05 ± 2.51 | 92.57 ± 3.29 | 93.67 ± 2.36 | 97.81 ± 0.78 |
| | DT | 93.07 ± 1.53 | 81.28 ± 4.69 | 80.16 ± 3.67 | 80.48 ± 3.57 | 93.07 ± 1.53 |
| | BT | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 |
| | RF | 96.7 ± 0.88 | 97.19 ± 1.53 | 84.12 ± 4.34 | 89.19 ± 3.31 | 96.7 ± 0.88 |
| | AdaBoost | 97.93 ± 1.19 | 96.35 ± 3.61 | 91.88 ± 4.03 | 93.89 ± 3.48 | 97.93 ± 1.19 |
| | IFPIFSC | 98.15 ± 0.73 | 97.21 ± 1.91 | 92.27 ± 3.25 | 94.49 ± 2.25 | 98.15 ± 0.73 |
| Car Evaluation | SVM | 92.53 ± 0.7 | 79.4 ± 4.27 | 75.88 ± 4.2 | 76.98 ± 3.69 | 85.07 ± 1.39 |
| | DT | 97.82 ± 0.48 | 90.42 ± 2.87 | 91.52 ± 3.65 | 90.66 ± 2.75 | 95.64 ± 0.96 |
| | BT | 98.51 ± 0.46 | 90.14 ± 3.25 | 94.03 ± 3.13 | 91.67 ± 3.06 | 97.02 ± 0.92 |
| | RF | 98.94 ± 0.47 | 94.21 ± 2.84 | 96.29 ± 2.7 | 95.1 ± 2.61 | 97.88 ± 0.94 |
| | AdaBoost | 98.51 ± 0.46 | 90.14 ± 3.25 | 94.03 ± 3.13 | 91.67 ± 3.06 | 97.02 ± 0.92 |
| | IFPIFSC | 97.97 ± 0.51 | 90.52 ± 3.27 | 90.47 ± 3.45 | 90.21 ± 2.58 | 95.94 ± 1.01 |
| Wireless Indoor Localization | SVM | 99 ± 0.35 | 98.02 ± 0.68 | 97.99 ± 0.69 | 97.99 ± 0.69 | 97.99 ± 0.69 |
| | DT | 98.52 ± 0.4 | 97.08 ± 0.78 | 97.05 ± 0.8 | 97.04 ± 0.8 | 97.05 ± 0.8 |
| | BT | 99.08 ± 0.31 | 98.18 ± 0.61 | 98.16 ± 0.63 | 98.16 ± 0.63 | 98.16 ± 0.63 |
| | RF | 99.09 ± 0.34 | 98.21 ± 0.66 | 98.19 ± 0.68 | 98.19 ± 0.68 | 98.19 ± 0.68 |
| | AdaBoost | 99.08 ± 0.31 | 98.18 ± 0.61 | 98.16 ± 0.63 | 98.16 ± 0.63 | 98.16 ± 0.63 |
| | IFPIFSC | 99.15 ± 0.32 | 98.33 ± 0.64 | 98.31 ± 0.65 | 98.31 ± 0.65 | 98.31 ± 0.65 |
| Mean Performance Results | SVM | 90.99 ± 2.01 | 83.07 ± 4.94 | 77.74 ± 4.96 | 82.68 ± 4.25 | 83.8 ± 3.43 |
| | DT | 90.41 ± 2.48 | 80.18 ± 5.64 | 77.84 ± 5.57 | 80.75 ± 4.78 | 83.16 ± 4.09 |
| | BT | 89.55 ± 2.92 | 76.33 ± 5.89 | 72.12 ± 5.98 | 78.26 ± 4.8 | 77.45 ± 4.64 |
| | RF | 93.59 ± 1.96 | 87.85 ± 4.62 | 84.12 ± 4.98 | 87.04 ± 4.02 | 88.85 ± 3.31 |
| | AdaBoost | 89.39 ± 3.02 | 76.16 ± 6.07 | 71.5 ± 6.46 | 78.01 ± 4.84 | 77.29 ± 4.74 |
| | IFPIFSC | 94.34 ± 1.85 | 88.02 ± 4.26 | 85.86 ± 4.46 | 87.65 ± 3.78 | 89.44 ± 3.09 |
Acc, Pre, Rec, MacF, and MicF results and their standard deviations (SD) are presented as percentages. The best performance results are shown in bold.
Table 6. Ranking numbers of the best results for all non-fuzzy-based classifiers.

| Classifiers | Acc | Pre | Rec | MacF | MicF | Total Rank |
|---|---|---|---|---|---|---|
| SVM | 1/20 | 1/20 | 2/20 | 1/20 | 1/20 | 6/100 |
| DT | 1/20 | 1/20 | 1/20 | 1/20 | 1/20 | 5/100 |
| BT | 2/20 | 3/20 | 2/20 | 2/20 | 2/20 | 11/100 |
| RF | 7/20 | 8/20 | 6/20 | 8/20 | 7/20 | 36/100 |
| AdaBoost | 1/20 | 2/20 | 1/20 | 1/20 | 1/20 | 6/100 |
| IFPIFSC | 11/20 | 9/20 | 11/20 | 10/20 | 11/20 | 52/100 |
Table 7. Ranking numbers of the best results of IFPIFSC over the others.

| Classifiers | Acc | Pre | Rec | MacF | MicF |
|---|---|---|---|---|---|
| IFPIFSC versus SVM | 20 | 19 | 17 | 20 | 20 |
| IFPIFSC versus DT | 20 | 20 | 19 | 19 | 20 |
| IFPIFSC versus BT | 15 | 15 | 16 | 14 | 15 |
| IFPIFSC versus RF | 12 | 11 | 13 | 11 | 12 |
| IFPIFSC versus AdaBoost | 16 | 16 | 17 | 15 | 16 |
Table 8. Time complexities based on big O notation of the classifiers.
Table 8. Time complexities based on big O notation of the classifiers.
ClassifierTime Complexity
Fuzzy kNN O ( n 2 log k )
FSSC O ( m l )
FussCyier O ( m l )
HDFSSC O ( m l )
FPFSCC O ( m l )
FPFSNHC O ( m l )
FPFS-EC O ( m n )
FPFS-AC O ( m n )
FPFS-CMC O ( m 2 + m n )
FPFS-kNN(P) O ( m 2 l )
FPFS-kNN(S) O ( m 2 l )
FPFS-kNN(K) O ( m 2 l )
SVM O ( m 2 n 2 )
DT O ( m n log n )
BT O ( t m n log n )
RF O ( t m n log n )
AdaBoost O ( t m n log n )
IFPIFSC O ( m n )
k is the number of nearest neighbours, m is the number of samples in the training data, n is the number of parameters in the training data, l is the number of classes in the data, and t is the number of trees.
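The O(mn) cost of IFPIFSC reflects that classifying one test sample requires comparing it against each of the m training samples over n parameters, with a constant-time evaluation per parameter. As an illustration of such a per-parameter evaluation, the sketch below computes the classical normalized Hamming distance for intuitionistic fuzzy values (membership μ, non-membership ν, hesitancy π = 1 − μ − ν); this is a generic textbook measure, not necessarily one of the six pseudo-similarities proposed in the paper.

```python
import numpy as np

def ifs_hamming_distance(mu1, nu1, mu2, nu2):
    """Normalized Hamming distance between two intuitionistic fuzzy vectors.

    Each vector is given by membership (mu) and non-membership (nu) degrees
    over n parameters; cost is O(n), hence O(mn) to score m training samples.
    """
    mu1, nu1, mu2, nu2 = map(np.asarray, (mu1, nu1, mu2, nu2))
    pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2  # hesitancy degrees
    total = np.abs(mu1 - mu2) + np.abs(nu1 - nu2) + np.abs(pi1 - pi2)
    return float(np.sum(total) / (2 * len(mu1)))
```

A similarity can then be obtained as, for example, 1 − ifs_hamming_distance(...), and a nearest-prototype or nearest-neighbour rule built on it inherits the O(mn) classification cost shown for IFPIFSC in Table 8.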

Share and Cite

Memiş, S.; Arslan, B.; Aydın, T.; Enginoğlu, S.; Camcı, Ç. Distance and Similarity Measures of Intuitionistic Fuzzy Parameterized Intuitionistic Fuzzy Soft Matrices and Their Applications to Data Classification in Supervised Learning. Axioms 2023, 12, 463. https://doi.org/10.3390/axioms12050463