Article

Intuitionistic Fuzzy Granular Matrix: Novel Calculation Approaches for Intuitionistic Fuzzy Covering-Based Rough Sets

by Jingqian Wang 1 and Xiaohong Zhang 1,2,*
1
School of Mathematics and Data Science, Shaanxi University of Science and Technology, Xi’an 710021, China
2
Shaanxi Joint Laboratory of Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China
*
Author to whom correspondence should be addressed.
Axioms 2024, 13(6), 411; https://doi.org/10.3390/axioms13060411
Submission received: 16 April 2024 / Revised: 23 May 2024 / Accepted: 13 June 2024 / Published: 18 June 2024

Abstract: Intuitionistic fuzzy (IF) β-minimal description operators can deal with noisy data in IF covering-based rough set theory; that is, they can be used to find the data we need in IF environments. For an IF β-covering approximation space (i.e., an IF environment) with a high cardinality, it would be tedious and complicated to use IF set representations to calculate them. Therefore, it is necessary to find a quick method to obtain them. In this paper, we present the notion of IF β-maximal description based on the definition of IF β-minimal description, along with the concepts of the IF granular matrix and IF reduction. Moreover, we propose matrix calculation methods for IF covering-based rough sets, covering IF β-minimal descriptions, IF β-maximal descriptions, and IF reductions. Firstly, the notion of an IF granular matrix is presented, which is used to calculate IF β-minimal descriptions. Secondly, inspired by IF β-minimal description, we give the notion of IF β-maximal description, and the matrix representations of IF β-maximal descriptions are presented. Next, two types of reductions for IF β-covering approximation spaces via IF β-minimal and IF β-maximal descriptions are presented, along with their matrix representations. Finally, the new calculation methods are compared with the corresponding set representations by carrying out several experiments.

1. Introduction

Covering-based rough set theory [1] was proposed to deal with covering data, enriching classical rough set theory [2]. Nearly forty covering rough set models [3,4,5] have been developed in covering approximation spaces. These models extended its application to practical problems such as decision rule synthesis [6,7,8], knowledge reduction [9,10,11] and other fields [12,13,14]. In a covering approximation space with a high cardinality, using set representations to investigate real-life problems would be complicated and tedious. As computer-implemented methods, matrix approaches are ideal tools for managing this problem; these include matrices for axiomatizing three types of covering approximation operators [15], matrices for studying knowledge reduction in dynamic covering decision information systems [16], matrices for representing 32 pairs of neighborhood-based upper and lower approximation operators [17], and matrices for minimal and maximal descriptions in covering-based rough sets [18].
Fuzzy set theory [19] addresses the issue of how to understand and manipulate imperfect knowledge. It and rough set theory are related but distinct and complementary [20]. The first fuzzy rough set model, based on a fuzzy similarity relation, was established in [21]. Subsequently, essential research on fuzzy rough set models was carried out with different fuzzy logical connectives [22,23] and fuzzy relations [24,25]. Recently, some fuzzy rough set models were constructed in fuzzy covering approximation spaces [26,27]. In particular, Ma [28] presented the concept of a fuzzy β-covering approximation space by replacing "1" with a parameter β, where "1" is a condition in the definition of a fuzzy covering. Inspired by Ma's work, many fuzzy covering-based rough set models were established. For example, several important new notions and other types of fuzzy β-covering rough set models are presented in [29]. Multigranulation fuzzy rough covering models based on fuzzy β-neighborhoods were used to solve multi-criteria group decision-making problems in [30]. At the same time, matrix approaches are widely used in these studies. For example, Ma [28] proposed the matrix representations of two pairs of fuzzy β-covering approximation operators. Yang and Hu [31] used matrices to represent three pairs of L-fuzzy covering-based approximation operators, as well as other fuzzy covering approximation operators in [32]. Wang et al. [33] used matrices to calculate fuzzy β-minimal descriptions, fuzzy β-maximal descriptions and fuzzy β-neighborhoods.
As a generalization of fuzzy set theory, intuitionistic fuzzy (IF) set theory [34] can express stronger information uncertainty. IF β-covering rough set models [35] were first presented in IF β-covering approximation spaces and were later extended to other models [36,37], which were used in decision making and feature selection. Since an IF set records both the degree of membership and the degree of non-membership of an element, it is difficult to use matrices to study these IF β-covering rough sets. To solve this problem, we use matrix approaches in IF β-covering approximation spaces, providing a new viewpoint for investigating IF β-covering rough set models and optimizing complex computations expressed by IF sets. The motivations and contents of this paper are listed as follows:
  • Huang et al. [35] presented the notion of IF β-minimal description, but the dual notion of IF β-maximal description has not been proposed. Therefore, this new notion is given in this paper; it reflects a different method of information screening.
  • In [33], matrix methods are used for calculating minimal and maximal descriptions in covering approximation spaces, and in [33,38], fuzzy matrix methods are used for calculating fuzzy β-minimal and fuzzy β-maximal descriptions in fuzzy β-covering approximation spaces. Analogously, we present IF matrix methods for calculating IF β-minimal and IF β-maximal descriptions in IF β-covering approximation spaces.
  • Many different notions of reduction exist in covering and fuzzy β-covering approximation spaces. It is therefore interesting to define reductions in IF β-covering approximation spaces by IF β-minimal and IF β-maximal descriptions, respectively. Based on the matrix representations of IF β-minimal and β-maximal descriptions, these new notions of IF reductions can also be represented by matrices.
The rest of this article is arranged as follows: Section 2 reviews some basic definitions about coverings, IF sets and IF β-covering approximation spaces. In Section 3, the notion of an IF granular matrix is presented in an IF β-covering approximation space, and IF matrix approaches are used to calculate the IF β-minimal and β-maximal descriptions. In Section 4, two types of reductions in an IF β-covering approximation space are presented through IF β-minimal and β-maximal descriptions, respectively; these two types of reductions are also represented via IF granular matrices. Section 5 compares the existing set representation method with the new IF matrix method through a number of experiments, and the advantages and efficiency of the new method are explained from different viewpoints. Section 6 provides the conclusion and prospects.

2. Basic Definitions

This section recalls some fundamental definitions related to coverings, IF sets and IF covering-based rough sets. Suppose U is a nonempty and finite set called a “universe” unless stated to the contrary.
Definition 1
([39,40]). Let U be a universe and C be a family of subsets of U. If no element in C is empty and ⋃C = U, then C is called a covering of U. We call (U, C) a covering approximation space.
We show the notions of minimal and maximal descriptions in the covering approximation space as follows:
Let C be a covering of U. For any x ∈ U, we call
Md_C(x) = {K ∈ C : x ∈ K ∧ (∀S ∈ C)(x ∈ S ∧ S ⊆ K ⟹ K = S)}
the minimal description of x [1,41], and call
MD_C(x) = {K ∈ C : x ∈ K ∧ (∀S ∈ C)(x ∈ S ∧ K ⊆ S ⟹ K = S)}
the maximal description of x [42].
In [18], we showed the advantages of matrix approaches for obtaining the minimal description and the maximal description in covering rough sets. Moreover, in [33], we showed the advantages of fuzzy minimal description and fuzzy maximal description in fuzzy covering rough sets. For example, they can be used in granular reductions (see Example 17 in [33]), and can also be used to obtain fuzzy neighborhoods (see Figure 4 in [33]).
In the following, we introduce some basic notions of IF set theory.
Definition 2
([34]). Let U be a universe. An intuitionistic fuzzy set (IFS) A in U is defined as follows:
A = {⟨x, μ_A(x), ν_A(x)⟩ : x ∈ U},
where μ_A : U → [0, 1] is called the degree of membership of the element x ∈ U to A, and ν_A : U → [0, 1] is called the degree of non-membership. They satisfy 0 ≤ μ_A(x) + ν_A(x) ≤ 1 for all x ∈ U. The family of all intuitionistic fuzzy sets in U is denoted by IF(U).
We call ⟨a, b⟩ with 0 ≤ a, b ≤ 1 and a + b ≤ 1 an IF value. As is well known, for two IF values α = ⟨a, b⟩ and β = ⟨c, d⟩, α ⪰ β ⟺ a ≥ c and b ≤ d.
For any family {γ_i : i ∈ I} ⊆ [0, 1] with I ⊆ N⁺ (N⁺ is the set of all positive integers), we write ∨_{i∈I} γ_i for the supremum of {γ_i : i ∈ I} and ∧_{i∈I} γ_i for the infimum of {γ_i : i ∈ I}. Some basic operations on IF(U) are listed as follows [34]: for all A, B ∈ IF(U),
(1) A ⊆ B iff μ_A(x) ≤ μ_B(x) and ν_B(x) ≤ ν_A(x) for all x ∈ U;
(2) A = B iff A ⊆ B and B ⊆ A;
(3) A ∩ B = {⟨x, μ_A(x) ∧ μ_B(x), ν_A(x) ∨ ν_B(x)⟩ : x ∈ U};
(4) A ∪ B = {⟨x, μ_A(x) ∨ μ_B(x), ν_A(x) ∧ ν_B(x)⟩ : x ∈ U};
(5) ∼A = {⟨x, ν_A(x), μ_A(x)⟩ : x ∈ U}.
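For readers who want to experiment, operations (1)-(5) can be sketched in Python; an IF set over a finite universe is represented here as a list of (μ, ν) pairs (the representation and the function names are our own illustration, not notation from the paper).

```python
# Sketch of the basic IF set operations (1)-(5); an IF set over a finite
# universe is modelled as a list of (membership, non-membership) pairs.

def if_subset(A, B):
    # (1) A ⊆ B iff mu_A(x) <= mu_B(x) and nu_B(x) <= nu_A(x) for all x
    return all(ma <= mb and nb <= na for (ma, na), (mb, nb) in zip(A, B))

def if_inter(A, B):
    # (3) pointwise minimum of memberships, maximum of non-memberships
    return [(min(ma, mb), max(na, nb)) for (ma, na), (mb, nb) in zip(A, B)]

def if_union(A, B):
    # (4) pointwise maximum of memberships, minimum of non-memberships
    return [(max(ma, mb), min(na, nb)) for (ma, na), (mb, nb) in zip(A, B)]

def if_compl(A):
    # (5) the complement swaps membership and non-membership degrees
    return [(na, ma) for (ma, na) in A]

A = [(0.6, 0.3), (0.5, 0.2)]
B = [(0.7, 0.2), (0.6, 0.1)]
print(if_subset(A, B))   # True: A is contained in B
print(if_compl(A))       # [(0.3, 0.6), (0.2, 0.5)]
```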
Definition 3
([35]). Let U be a universe and β = ⟨a, b⟩ be an IF value. Then, we call Ĉ = {C1, C2, …, Cm}, with C_i ∈ IF(U) (i = 1, 2, …, m), an IF β-covering of U if for any x ∈ U there exists C_i ∈ Ĉ such that C_i(x) ⪰ β. We also call (U, Ĉ) an IF β-covering approximation space.
Finally, we introduce some notions in the IF β -covering approximation space.
Definition 4
([35]). Let (U, Ĉ) be an IF β-covering approximation space. For each x ∈ U, the IF β-neighborhood Ñ_Ĉ(x)^β of x induced by Ĉ is defined as follows:
Ñ_Ĉ(x)^β = ⋂{C ∈ Ĉ : C(x) ⪰ β}.
Definition 5
([35]). Let (U, Ĉ) be an IF β-covering approximation space. For any x ∈ U, its IF β-minimal description Md̃_Ĉ^β(x) is defined as follows:
Md̃_Ĉ^β(x) = {C ∈ Ĉ : C(x) ⪰ β ∧ (∀D ∈ Ĉ)(D(x) ⪰ β ∧ D ⊆ C ⟹ C = D)}.
For better reading and understanding, we explain relevant symbols in Table 1.
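To make Definitions 3-5 concrete, the following Python sketch computes IF β-minimal descriptions directly from their set representations, using the IF β-covering of Example 1 below with β = ⟨0.6, 0.3⟩ (the data are transcribed from that example; the helper names are ours).

```python
# Set-representation computation of IF β-minimal descriptions
# (Definition 5) for the data of Example 1, with beta = (0.6, 0.3).
BETA = (0.6, 0.3)
COVER = {  # each C_i lists its IF value <mu, nu> at x1..x6
    "C1": [(0.6, 0.3), (0.5, 0.2), (0.7, 0.3), (0.6, 0.4), (0.7, 0.3), (0.6, 0.2)],
    "C2": [(0.7, 0.3), (0.2, 0.6), (0.1, 0.7), (0.6, 0.2), (0.5, 0.2), (0.5, 0.3)],
    "C3": [(0.7, 0.2), (0.6, 0.1), (0.8, 0.1), (0.6, 0.3), (0.8, 0.1), (0.6, 0.1)],
    "C4": [(0.3, 0.5), (0.5, 0.5), (0.5, 0.3), (0.4, 0.5), (0.6, 0.2), (0.5, 0.3)],
    "C5": [(0.5, 0.1), (0.6, 0.4), (0.7, 0.2), (0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
}

def ge(a, b):
    # IF value order: <a1, a2> ⪰ <b1, b2> iff a1 >= b1 and a2 <= b2
    return a[0] >= b[0] and a[1] <= b[1]

def subset(A, B):
    # IF set inclusion A ⊆ B
    return all(ma <= mb and na >= nb for (ma, na), (mb, nb) in zip(A, B))

def minimal_description(x):
    # x is an index 0..5 standing for x1..x6
    cand = [c for c in COVER if ge(COVER[c][x], BETA)]
    return {c for c in cand
            if not any(d != c and subset(COVER[d], COVER[c]) for d in cand)}

print(sorted(minimal_description(0)))  # ['C1', 'C2']
print(sorted(minimal_description(4)))  # ['C1', 'C4']
```

This direct method scans all candidate pairs; the matrix approach of Section 3 organizes exactly these comparisons into a single product.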

3. Matrix Representations of IF β-Minimal and β-Maximal Descriptions

In this section, IF β -minimal and β -maximal descriptions are computed by matrices. First, some new matrices and corresponding operations are proposed in an IF β -covering approximation space. Then, several properties of these new matrices are presented. Finally, we give the matrix representations of IF β -minimal and β -maximal descriptions based on the results above.

3.1. Matrix Representations of IF β -Minimal Descriptions

In this subsection, we present some new matrices and matrix operations in the IF β-covering approximation space. Moreover, the matrix representation of IF β-minimal descriptions is presented.
Definition 6.
Let (U, Ĉ) be an IF β-covering approximation space, where U = {x1, …, xn} and Ĉ = {C1, C2, …, Cm}. We call M_Ĉ = (C_j(x_i))_{n×m} an IF granular matrix representation of Ĉ.
Example 1.
Let U = { x 1 , x 2 , x 3 , x 4 , x 5 , x 6 } and C ^ = { C 1 , C 2 , C 3 , C 4 , C 5 } , where
C1 = ⟨0.6,0.3⟩/x1 + ⟨0.5,0.2⟩/x2 + ⟨0.7,0.3⟩/x3 + ⟨0.6,0.4⟩/x4 + ⟨0.7,0.3⟩/x5 + ⟨0.6,0.2⟩/x6,
C2 = ⟨0.7,0.3⟩/x1 + ⟨0.2,0.6⟩/x2 + ⟨0.1,0.7⟩/x3 + ⟨0.6,0.2⟩/x4 + ⟨0.5,0.2⟩/x5 + ⟨0.5,0.3⟩/x6,
C3 = ⟨0.7,0.2⟩/x1 + ⟨0.6,0.1⟩/x2 + ⟨0.8,0.1⟩/x3 + ⟨0.6,0.3⟩/x4 + ⟨0.8,0.1⟩/x5 + ⟨0.6,0.1⟩/x6,
C4 = ⟨0.3,0.5⟩/x1 + ⟨0.5,0.5⟩/x2 + ⟨0.5,0.3⟩/x3 + ⟨0.4,0.5⟩/x4 + ⟨0.6,0.2⟩/x5 + ⟨0.5,0.3⟩/x6,
C5 = ⟨0.5,0.1⟩/x1 + ⟨0.6,0.4⟩/x2 + ⟨0.7,0.2⟩/x3 + ⟨0.6,0.3⟩/x4 + ⟨0.7,0.2⟩/x5 + ⟨0.6,0.2⟩/x6.
Suppose β = ⟨0.6, 0.3⟩. By Definition 3, we know Ĉ is an IF β-covering of U. By Definition 6, we have
M_Ĉ =
⟨0.6,0.3⟩ ⟨0.7,0.3⟩ ⟨0.7,0.2⟩ ⟨0.3,0.5⟩ ⟨0.5,0.1⟩
⟨0.5,0.2⟩ ⟨0.2,0.6⟩ ⟨0.6,0.1⟩ ⟨0.5,0.5⟩ ⟨0.6,0.4⟩
⟨0.7,0.3⟩ ⟨0.1,0.7⟩ ⟨0.8,0.1⟩ ⟨0.5,0.3⟩ ⟨0.7,0.2⟩
⟨0.6,0.4⟩ ⟨0.6,0.2⟩ ⟨0.6,0.3⟩ ⟨0.4,0.5⟩ ⟨0.6,0.3⟩
⟨0.7,0.3⟩ ⟨0.5,0.2⟩ ⟨0.8,0.1⟩ ⟨0.6,0.2⟩ ⟨0.7,0.2⟩
⟨0.6,0.2⟩ ⟨0.5,0.3⟩ ⟨0.6,0.1⟩ ⟨0.5,0.3⟩ ⟨0.6,0.2⟩.
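Definition 6 translates directly into code: the granular matrix simply tabulates C_j(x_i). A minimal sketch for Example 1 (variable names ours):

```python
# Building the IF granular matrix of Definition 6 for Example 1:
# entry M[i][j] is the IF value C_{j+1}(x_{i+1}).
COVER = [  # C1..C5, each listing its IF value at x1..x6
    [(0.6, 0.3), (0.5, 0.2), (0.7, 0.3), (0.6, 0.4), (0.7, 0.3), (0.6, 0.2)],
    [(0.7, 0.3), (0.2, 0.6), (0.1, 0.7), (0.6, 0.2), (0.5, 0.2), (0.5, 0.3)],
    [(0.7, 0.2), (0.6, 0.1), (0.8, 0.1), (0.6, 0.3), (0.8, 0.1), (0.6, 0.1)],
    [(0.3, 0.5), (0.5, 0.5), (0.5, 0.3), (0.4, 0.5), (0.6, 0.2), (0.5, 0.3)],
    [(0.5, 0.1), (0.6, 0.4), (0.7, 0.2), (0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
]
n, m = 6, 5
M = [[COVER[j][i] for j in range(m)] for i in range(n)]
print(M[0])  # row for x1: [(0.6, 0.3), (0.7, 0.3), (0.7, 0.2), (0.3, 0.5), (0.5, 0.1)]
```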
Based on Definition 6, another two matrices about M C ^ are proposed in the following definition.
Definition 7.
Let (U, Ĉ) be an IF β-covering approximation space and M_Ĉ = (C_j(x_i))_{n×m} be a matrix representation of Ĉ, where U = {x1, …, xn} and Ĉ = {C1, C2, …, Cm}.
(1) For any 1 ≤ j ≤ m, M_Ĉ(x_i) = (α_kj)_{n×m} is called an IF eigenmatrix of x_i, where, for 1 ≤ k ≤ n,
α_kj = C_j(x_k) if C_j(x_i) ⪰ β, and α_kj = ⟨0, 0⟩ otherwise.
(2) M̄_Ĉ = (n)_{m×m} is called the IF β-covering number matrix of Ĉ.
Remark 1.
In Definition 7, for M_Ĉ(x_i): if C_j(x_i) ⪰ β, then the j-th column of M_Ĉ(x_i) is the j-th column of M_Ĉ; otherwise, all elements in the j-th column of M_Ĉ(x_i) are ⟨0, 0⟩.
Example 2 (Continued from Example 1).
By Definition 7, we have
M_Ĉ(x1) =
⟨0.6,0.3⟩ ⟨0.7,0.3⟩ ⟨0.7,0.2⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0.5,0.2⟩ ⟨0.2,0.6⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0.7,0.3⟩ ⟨0.1,0.7⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0.6,0.4⟩ ⟨0.6,0.2⟩ ⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0.7,0.3⟩ ⟨0.5,0.2⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0.6,0.2⟩ ⟨0.5,0.3⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0,0⟩,
M_Ĉ(x2) =
⟨0,0⟩ ⟨0,0⟩ ⟨0.7,0.2⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0,0⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0,0⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0,0⟩ ⟨0,0⟩ ⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0,0⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0,0⟩
⟨0,0⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0,0⟩,
M_Ĉ(x3) =
⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0.7,0.2⟩ ⟨0,0⟩ ⟨0.5,0.1⟩
⟨0.5,0.2⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0.6,0.4⟩
⟨0.7,0.3⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0.7,0.2⟩
⟨0.6,0.4⟩ ⟨0,0⟩ ⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0.6,0.3⟩
⟨0.7,0.3⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0.7,0.2⟩
⟨0.6,0.2⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0.6,0.2⟩,
M_Ĉ(x4) =
⟨0,0⟩ ⟨0.7,0.3⟩ ⟨0.7,0.2⟩ ⟨0,0⟩ ⟨0.5,0.1⟩
⟨0,0⟩ ⟨0.2,0.6⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0.6,0.4⟩
⟨0,0⟩ ⟨0.1,0.7⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0.7,0.2⟩
⟨0,0⟩ ⟨0.6,0.2⟩ ⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0.6,0.3⟩
⟨0,0⟩ ⟨0.5,0.2⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0.7,0.2⟩
⟨0,0⟩ ⟨0.5,0.3⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0.6,0.2⟩,
M_Ĉ(x5) =
⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0.7,0.2⟩ ⟨0.3,0.5⟩ ⟨0.5,0.1⟩
⟨0.5,0.2⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0.5,0.5⟩ ⟨0.6,0.4⟩
⟨0.7,0.3⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0.5,0.3⟩ ⟨0.7,0.2⟩
⟨0.6,0.4⟩ ⟨0,0⟩ ⟨0.6,0.3⟩ ⟨0.4,0.5⟩ ⟨0.6,0.3⟩
⟨0.7,0.3⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0.6,0.2⟩ ⟨0.7,0.2⟩
⟨0.6,0.2⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0.5,0.3⟩ ⟨0.6,0.2⟩,
M_Ĉ(x6) =
⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0.7,0.2⟩ ⟨0,0⟩ ⟨0.5,0.1⟩
⟨0.5,0.2⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0.6,0.4⟩
⟨0.7,0.3⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0.7,0.2⟩
⟨0.6,0.4⟩ ⟨0,0⟩ ⟨0.6,0.3⟩ ⟨0,0⟩ ⟨0.6,0.3⟩
⟨0.7,0.3⟩ ⟨0,0⟩ ⟨0.8,0.1⟩ ⟨0,0⟩ ⟨0.7,0.2⟩
⟨0.6,0.2⟩ ⟨0,0⟩ ⟨0.6,0.1⟩ ⟨0,0⟩ ⟨0.6,0.2⟩,
and M̄_Ĉ = (6)_{5×5}, i.e., the 5 × 5 matrix whose every entry is 6.
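Definition 7 and Remark 1 amount to a column-masking rule, which the following sketch implements for Example 2 (our own code; β = ⟨0.6, 0.3⟩, data transcribed from Example 1).

```python
# IF eigenmatrices M(x_i) of Definition 7 for Example 2: keep column j
# of the granular matrix when C_j(x_i) is ⪰ beta, otherwise zero it out.
BETA = (0.6, 0.3)
COVER = [  # C1..C5 at x1..x6
    [(0.6, 0.3), (0.5, 0.2), (0.7, 0.3), (0.6, 0.4), (0.7, 0.3), (0.6, 0.2)],
    [(0.7, 0.3), (0.2, 0.6), (0.1, 0.7), (0.6, 0.2), (0.5, 0.2), (0.5, 0.3)],
    [(0.7, 0.2), (0.6, 0.1), (0.8, 0.1), (0.6, 0.3), (0.8, 0.1), (0.6, 0.1)],
    [(0.3, 0.5), (0.5, 0.5), (0.5, 0.3), (0.4, 0.5), (0.6, 0.2), (0.5, 0.3)],
    [(0.5, 0.1), (0.6, 0.4), (0.7, 0.2), (0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
]
n, m = 6, 5
ZERO = (0.0, 0.0)

def ge(a, b):
    return a[0] >= b[0] and a[1] <= b[1]

def eigenmatrix(i):
    return [[COVER[j][k] if ge(COVER[j][i], BETA) else ZERO
             for j in range(m)] for k in range(n)]

number_matrix = [[n] * m for _ in range(m)]  # the matrix (n)_{m×m}

def active(i):
    # indices j whose column survives in M(x_i)
    return {j for j in range(m) if ge(COVER[j][i], BETA)}

print(sorted(active(4)))  # x5 keeps the columns of C1, C3, C4, C5: [0, 2, 3, 4]
```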
Then, a new matrix operation is presented in the following definition.
Definition 8.
Let A = (α_ik)_{n×m} and B = (γ_kj)_{m×s} be two matrices of IF values, where α_ik = ⟨a_ik⁺, a_ik⁻⟩ and γ_kj = ⟨b_kj⁺, b_kj⁻⟩. We define C = A ⊵ B = (c_ij)_{n×s}, where
c_ij = Σ_{k=1}^{m} (α_ik ⊵ γ_kj) if row i of A and column j of B are not 0, and c_ij = 0 otherwise; here α_ik ⊵ γ_kj = 1 if a_ik⁺ ≥ b_kj⁺ and a_ik⁻ ≤ b_kj⁻ (i.e., α_ik ⪰ γ_kj), and 0 otherwise.
In Definition 8, 0 denotes the vector whose every element is ⟨0, 0⟩. Then, two characteristics of M_Ĉ^T(x) ⊵ M_Ĉ(x) (x ∈ U) are presented in the following two propositions, respectively.
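Under our reading of Definition 8 (the entrywise comparison yields 1 exactly when the left IF value is ⪰ the right one, which is what reproduces the integer matrices of Example 3), the ⊵ product can be sketched as follows; applied to M^T(x1) and M(x1) of Example 2, it returns the matrix that appears in Example 3.

```python
# Sketch of the ⊵ product of Definition 8 on matrices of IF values.
BETA = (0.6, 0.3)
COVER = [  # C1..C5 at x1..x6 (data of Example 1)
    [(0.6, 0.3), (0.5, 0.2), (0.7, 0.3), (0.6, 0.4), (0.7, 0.3), (0.6, 0.2)],
    [(0.7, 0.3), (0.2, 0.6), (0.1, 0.7), (0.6, 0.2), (0.5, 0.2), (0.5, 0.3)],
    [(0.7, 0.2), (0.6, 0.1), (0.8, 0.1), (0.6, 0.3), (0.8, 0.1), (0.6, 0.1)],
    [(0.3, 0.5), (0.5, 0.5), (0.5, 0.3), (0.4, 0.5), (0.6, 0.2), (0.5, 0.3)],
    [(0.5, 0.1), (0.6, 0.4), (0.7, 0.2), (0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
]
n, m = 6, 5
ZERO = (0.0, 0.0)

def ge(a, b):
    return a[0] >= b[0] and a[1] <= b[1]

def eigenmatrix(i):
    return [[COVER[j][k] if ge(COVER[j][i], BETA) else ZERO
             for j in range(m)] for k in range(n)]

def tri_product(A, B):
    # c_ij counts indices k with A[i][k] ⪰ B[k][j]; a fully zero row of A
    # or column of B forces c_ij = 0, as required by Definition 8
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        if all(v == ZERO for v in A[i]):
            continue
        for j in range(cols):
            col = [B[k][j] for k in range(inner)]
            if all(v == ZERO for v in col):
                continue
            C[i][j] = sum(1 for k in range(inner) if ge(A[i][k], col[k]))
    return C

Mx1 = eigenmatrix(0)
Mx1T = [list(row) for row in zip(*Mx1)]
print(tri_product(Mx1T, Mx1))
```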
Proposition 1.
Let (U, Ĉ) be an IF β-covering approximation space and M_Ĉ^T(x_k) ⊵ M_Ĉ(x_k) = (a_ij)_{m×m}, where x_k ∈ U (k = 1, 2, …, n) and Ĉ = {C1, C2, …, Cm}. Then, a_ij = |{x ∈ U : (C_i ∩ C_j)(x_k) ⪰ β ∧ C_i(x) ⪰ C_j(x)}|.
Proof. 
a_ij = Σ_{x∈U} (C_i(x) ⊵ C_j(x)) if C_i(x_k) ⪰ β and C_j(x_k) ⪰ β, and a_ij = 0 otherwise;
hence a_ij = |{x ∈ U : C_i(x) ⪰ C_j(x)}| if (C_i ∩ C_j)(x_k) ⪰ β, and a_ij = 0 otherwise;
that is, a_ij = |{x ∈ U : (C_i ∩ C_j)(x_k) ⪰ β ∧ C_i(x) ⪰ C_j(x)}|. □
Proposition 2.
Let (U, Ĉ) be an IF β-covering approximation space, x_k ∈ U, M_Ĉ^T(x_k) ⊵ M_Ĉ(x_k) = (a_ij)_{m×m} and M̄_Ĉ = (b_ij)_{m×m}. For any t ∈ {1, 2, …, m}, a_tt = b_tt if and only if a_tt > 0.
Proof. 
Suppose U = { x 1 , , x n } and C ^ = { C 1 , C 2 , , C m } . Then,
a_tt = |{x ∈ U : C_t(x_k) ⪰ β}|. Hence, a_tt = b_tt = n = |{x : x ∈ U}| ⟺ C_t(x_k) ⪰ β ⟺ a_tt > 0. □
Finally, the matrix representation of the IF β-minimal description is presented in Theorem 1. Let A = (a_ij)_{n×n} and B = (b_ij)_{n×n} be two matrices. We define C = A ⊕ B = (c_i)_{n×1}, where
c_i = 1 if, for all j, a_ij = b_ij ⟺ i = j; and c_i = 0 otherwise.
Let Ĉ = {C1, C2, …, Cm} be an IF β-covering of U and Ĉ1 ⊆ Ĉ. We call f(Ĉ1) = (y_i)_{m×1} the membership function of Ĉ1 in Ĉ, where
y_i = 1 if C_i ∈ Ĉ1, and y_i = 0 if C_i ∉ Ĉ1.
Note that "⊵" is applied before "⊕" in expressions.
Theorem 1.
Let (U, Ĉ) be an IF β-covering approximation space, where U = {x1, …, xn} and Ĉ = {C1, C2, …, Cm}. Then,
f(Md̃_Ĉ^β(x_k)) = M_Ĉ^T(x_k) ⊵ M_Ĉ(x_k) ⊕ M̄_Ĉ, k = 1, 2, …, n.
Proof. 
Suppose M_Ĉ^T(x_k) ⊵ M_Ĉ(x_k) = (a_ij)_{m×m}, M̄_Ĉ = (b_ij)_{m×m} and f(Md̃_Ĉ^β(x_k)) = (y_j)_{m×1}. For any C_t ∈ Ĉ,
C_t ∈ Md̃_Ĉ^β(x_k) ⟺ (C_t(x_k) ⪰ β) ∧ (∀C_j ∈ Ĉ)((C_j(x_k) ⪰ β) ∧ (C_j ⊆ C_t) ⟹ C_t = C_j)
⟺ (C_t(x_k) ⪰ β) ∧ (∀j ∈ {1, 2, …, m})((C_j(x_k) ⪰ β) ∧ (C_j ⊆ C_t) ⟹ C_t = C_j)
⟺ (C_t(x_k) ⪰ β) ∧ (∀j ∈ {1, 2, …, m})((C_j(x_k) ⪰ β) ∧ (|{x ∈ U : C_t(x) ⪰ C_j(x)}| = n) ⟹ C_t = C_j)
⟺ (C_t(x_k) ⪰ β) ∧ (∀j ∈ {1, 2, …, m})(a_tj = b_tj ⟹ t = j)
⟺ (a_tt > 0) ∧ (∀j)(a_tj = b_tj ⟹ t = j)
⟺ (a_tt = b_tt) ∧ (∀j)(a_tj = b_tj ⟹ t = j)
⟺ y_t = 1.
Hence, f(Md̃_Ĉ^β(x_k)) = M_Ĉ^T(x_k) ⊵ M_Ĉ(x_k) ⊕ M̄_Ĉ. □
Example 3 (Continued from Example 1).
All M_Ĉ(x_k) (k = 1, 2, …, 6) and M̄_Ĉ are calculated in Examples 1 and 2. Hence,
f(Md̃_Ĉ^β(x1)) = M_Ĉ^T(x1) ⊵ M_Ĉ(x1) ⊕ M̄_Ĉ (with M_Ĉ(x1) and M̄_Ĉ as in Example 2)
= [6 3 0 0 0; 2 6 1 0 0; 6 5 6 0 0; 0 0 0 0 0; 0 0 0 0 0] ⊕ (6)_{5×5} = (1, 1, 0, 0, 0)^T,
i.e., M d ˜ C ^ β ( x 1 ) = { C 1 , C 2 } .
f(Md̃_Ĉ^β(x2)) = M_Ĉ^T(x2) ⊵ M_Ĉ(x2) ⊕ M̄_Ĉ = [0 0 0 0 0; 0 0 0 0 0; 0 0 6 0 0; 0 0 0 0 0; 0 0 0 0 0] ⊕ (6)_{5×5} = (0, 0, 1, 0, 0)^T,
i.e., M d ˜ C ^ β ( x 2 ) = { C 3 } .
f(Md̃_Ĉ^β(x3)) = M_Ĉ^T(x3) ⊵ M_Ĉ(x3) ⊕ M̄_Ĉ = [6 0 0 0 1; 0 0 0 0 0; 6 0 6 0 5; 0 0 0 0 0; 4 0 1 0 6] ⊕ (6)_{5×5} = (1, 0, 0, 0, 1)^T,
i.e., M d ˜ C ^ β ( x 3 ) = { C 1 , C 5 } .
f(Md̃_Ĉ^β(x4)) = M_Ĉ^T(x4) ⊵ M_Ĉ(x4) ⊕ M̄_Ĉ = [0 0 0 0 0; 0 6 1 0 1; 0 5 6 0 5; 0 0 0 0 0; 0 4 1 0 6] ⊕ (6)_{5×5} = (0, 1, 1, 0, 1)^T,
i.e., M d ˜ C ^ β ( x 4 ) = { C 2 , C 3 , C 5 } .
f(Md̃_Ĉ^β(x5)) = M_Ĉ^T(x5) ⊵ M_Ĉ(x5) ⊕ M̄_Ĉ = [6 0 0 5 1; 0 0 0 0 0; 6 0 6 6 5; 0 0 0 6 0; 4 0 1 6 6] ⊕ (6)_{5×5} = (1, 0, 0, 1, 0)^T,
i.e., M d ˜ C ^ β ( x 5 ) = { C 1 , C 4 } .
f(Md̃_Ĉ^β(x6)) = M_Ĉ^T(x6) ⊵ M_Ĉ(x6) ⊕ M̄_Ĉ = [6 0 0 0 1; 0 0 0 0 0; 6 0 6 0 5; 0 0 0 0 0; 4 0 1 0 6] ⊕ (6)_{5×5} = (1, 0, 0, 0, 1)^T,
i.e., M d ˜ C ^ β ( x 6 ) = { C 1 , C 5 } .
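The computations of Example 3 can be reproduced end to end. The sketch below is our own code, written under the reading that ⊵ counts the points where C_i(x) ⪰ C_j(x) on active pairs and that ⊕ marks the rows whose entries reach n on the diagonal and only there; run on Example 1 it yields all six IF β-minimal descriptions.

```python
# Matrix computation of IF β-minimal descriptions (Theorem 1) for Example 1.
BETA = (0.6, 0.3)
COVER = [  # C1..C5 at x1..x6
    [(0.6, 0.3), (0.5, 0.2), (0.7, 0.3), (0.6, 0.4), (0.7, 0.3), (0.6, 0.2)],
    [(0.7, 0.3), (0.2, 0.6), (0.1, 0.7), (0.6, 0.2), (0.5, 0.2), (0.5, 0.3)],
    [(0.7, 0.2), (0.6, 0.1), (0.8, 0.1), (0.6, 0.3), (0.8, 0.1), (0.6, 0.1)],
    [(0.3, 0.5), (0.5, 0.5), (0.5, 0.3), (0.4, 0.5), (0.6, 0.2), (0.5, 0.3)],
    [(0.5, 0.1), (0.6, 0.4), (0.7, 0.2), (0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
]
n, m = 6, 5

def ge(a, b):
    return a[0] >= b[0] and a[1] <= b[1]

def f_min_description(xk):
    act = [j for j in range(m) if ge(COVER[j][xk], BETA)]
    # M^T(x_k) ⊵ M(x_k): on active pairs, count points where C_i(x) ⪰ C_j(x)
    a = [[0] * m for _ in range(m)]
    for i in act:
        for j in act:
            a[i][j] = sum(1 for x in range(n) if ge(COVER[i][x], COVER[j][x]))
    # ⊕ with the number matrix (n)_{m×m}: y_i = 1 iff a_ij = n exactly when i = j
    return [int(all((a[i][j] == n) == (i == j) for j in range(m)))
            for i in range(m)]

for xk in range(n):
    print("x%d:" % (xk + 1), f_min_description(xk))
```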

3.2. Matrix Representations of IF β -Maximal Descriptions

Based on Section 3.1, we present the matrix representation of the IF β -maximal description in this subsection. Firstly, the concept of IF β -maximal description is given in the following definition.
Definition 9.
Let (U, Ĉ) be an IF β-covering approximation space. For any x ∈ U, its IF β-maximal description MD̃_Ĉ^β(x) is defined as follows:
MD̃_Ĉ^β(x) = {C ∈ Ĉ : C(x) ⪰ β ∧ (∀D ∈ Ĉ)(D(x) ⪰ β ∧ C ⊆ D ⟹ C = D)}.
To investigate the matrix representation of the IF β -maximal description, another new matrix operation is presented in the following definition.
Definition 10.
Let A = (α_ik)_{n×m} and B = (γ_kj)_{m×s} be two matrices of IF values, where α_ik = ⟨a_ik⁺, a_ik⁻⟩ and γ_kj = ⟨b_kj⁺, b_kj⁻⟩. We define C = A ⊴ B = (c_ij)_{n×s}, where
c_ij = Σ_{k=1}^{m} (α_ik ⊴ γ_kj) if row i of A and column j of B are not 0, and c_ij = 0 otherwise; here α_ik ⊴ γ_kj = 1 if a_ik⁺ ≤ b_kj⁺ and a_ik⁻ ≥ b_kj⁻ (i.e., α_ik ⪯ γ_kj), and 0 otherwise.
By Definition 10, two characteristics of M_Ĉ^T(x) ⊴ M_Ĉ(x) (x ∈ U) are presented in the following two propositions.
Proposition 3.
Let Ĉ = {C1, C2, …, Cm} be an IF β-covering of U = {x1, …, xn}, x_k ∈ U and M_Ĉ^T(x_k) ⊴ M_Ĉ(x_k) = (a_ij)_{m×m}. Then, a_ij = |{x ∈ U : (C_i ∩ C_j)(x_k) ⪰ β ∧ C_i(x) ⪯ C_j(x)}|.
Proof. 
a_ij = Σ_{x∈U} (C_i(x) ⊴ C_j(x)) if C_i(x_k) ⪰ β and C_j(x_k) ⪰ β, and a_ij = 0 otherwise;
hence a_ij = |{x ∈ U : C_i(x) ⪯ C_j(x)}| if (C_i ∩ C_j)(x_k) ⪰ β, and a_ij = 0 otherwise;
that is, a_ij = |{x ∈ U : (C_i ∩ C_j)(x_k) ⪰ β ∧ C_i(x) ⪯ C_j(x)}|. □
Proposition 4.
Let Ĉ = {C1, C2, …, Cm} be an IF β-covering of U = {x1, …, xn}, x_k ∈ U, M_Ĉ^T(x_k) ⊴ M_Ĉ(x_k) = (a_ij)_{m×m} and M̄_Ĉ = (b_ij)_{m×m}. For any t ∈ {1, 2, …, m}, a_tt = b_tt if and only if a_tt > 0.
Proof. 
a_tt = |{x ∈ U : C_t(x_k) ⪰ β}|. Hence, a_tt = b_tt = n = |{x : x ∈ U}| ⟺ C_t(x_k) ⪰ β ⟺ a_tt > 0. □
Finally, the matrix representation of the IF β-maximal description is presented in the following theorem. Note that "⊴" is applied before "⊕" in expressions.
Theorem 2.
Let Ĉ = {C1, C2, …, Cm} be an IF β-covering of U = {x1, …, xn}. Then,
f(MD̃_Ĉ^β(x_k)) = M_Ĉ^T(x_k) ⊴ M_Ĉ(x_k) ⊕ M̄_Ĉ, k = 1, 2, …, n.
Proof. 
Suppose M_Ĉ^T(x_k) ⊴ M_Ĉ(x_k) = (a_ij)_{m×m}, M̄_Ĉ = (b_ij)_{m×m} and f(MD̃_Ĉ^β(x_k)) = (y_j)_{m×1}. For any C_t ∈ Ĉ,
C_t ∈ MD̃_Ĉ^β(x_k) ⟺ (C_t(x_k) ⪰ β) ∧ (∀C_j ∈ Ĉ)((C_j(x_k) ⪰ β) ∧ (C_t ⊆ C_j) ⟹ C_t = C_j)
⟺ (C_t(x_k) ⪰ β) ∧ (∀j ∈ {1, 2, …, m})((C_j(x_k) ⪰ β) ∧ (C_t ⊆ C_j) ⟹ C_t = C_j)
⟺ (C_t(x_k) ⪰ β) ∧ (∀j ∈ {1, 2, …, m})((C_j(x_k) ⪰ β) ∧ (|{x ∈ U : C_t(x) ⪯ C_j(x)}| = n) ⟹ C_t = C_j)
⟺ (C_t(x_k) ⪰ β) ∧ (∀j ∈ {1, 2, …, m})(a_tj = b_tj ⟹ t = j)
⟺ (a_tt = b_tt) ∧ (∀j)(a_tj = b_tj ⟹ t = j)
⟺ y_t = 1.
Hence, f(MD̃_Ĉ^β(x_k)) = M_Ĉ^T(x_k) ⊴ M_Ĉ(x_k) ⊕ M̄_Ĉ. □
Example 4 (Continued from Example 1).
All M C ^ ( x k ) ( k = 1 , 2 , , 6 ) and M C ^ ¯ are calculated in Example 2. Then,
f(MD̃_Ĉ^β(x1)) = M_Ĉ^T(x1) ⊴ M_Ĉ(x1) ⊕ M̄_Ĉ (with M_Ĉ(x1) and M̄_Ĉ as in Example 2)
= [6 2 6 0 0; 3 6 5 0 0; 0 1 6 0 0; 0 0 0 0 0; 0 0 0 0 0] ⊕ (6)_{5×5} = (0, 1, 1, 0, 0)^T,
i.e., M D ˜ C ^ β ( x 1 ) = { C 2 , C 3 } .
f(MD̃_Ĉ^β(x2)) = M_Ĉ^T(x2) ⊴ M_Ĉ(x2) ⊕ M̄_Ĉ = [0 0 0 0 0; 0 0 0 0 0; 0 0 6 0 0; 0 0 0 0 0; 0 0 0 0 0] ⊕ (6)_{5×5} = (0, 0, 1, 0, 0)^T,
i.e., M D ˜ C ^ β ( x 2 ) = { C 3 } .
f(MD̃_Ĉ^β(x3)) = M_Ĉ^T(x3) ⊴ M_Ĉ(x3) ⊕ M̄_Ĉ = [6 0 6 0 4; 0 0 0 0 0; 0 0 6 0 1; 0 0 0 0 0; 1 0 5 0 6] ⊕ (6)_{5×5} = (0, 0, 1, 0, 1)^T,
i.e., M D ˜ C ^ β ( x 3 ) = { C 3 , C 5 } .
f(MD̃_Ĉ^β(x4)) = M_Ĉ^T(x4) ⊴ M_Ĉ(x4) ⊕ M̄_Ĉ = [0 0 0 0 0; 0 6 5 0 4; 0 1 6 0 1; 0 0 0 0 0; 0 1 5 0 6] ⊕ (6)_{5×5} = (0, 1, 1, 0, 1)^T,
i.e., M D ˜ C ^ β ( x 4 ) = { C 2 , C 3 , C 5 } .
f(MD̃_Ĉ^β(x5)) = M_Ĉ^T(x5) ⊴ M_Ĉ(x5) ⊕ M̄_Ĉ = [6 0 6 0 4; 0 0 0 0 0; 0 0 6 0 1; 5 0 6 6 6; 1 0 5 0 6] ⊕ (6)_{5×5} = (0, 0, 1, 0, 1)^T,
i.e., M D ˜ C ^ β ( x 5 ) = { C 3 , C 5 } .
f(MD̃_Ĉ^β(x6)) = M_Ĉ^T(x6) ⊴ M_Ĉ(x6) ⊕ M̄_Ĉ = [6 0 6 0 4; 0 0 0 0 0; 0 0 6 0 1; 0 0 0 0 0; 1 0 5 0 6] ⊕ (6)_{5×5} = (0, 0, 1, 0, 1)^T,
i.e., M D ˜ C ^ β ( x 6 ) = { C 3 , C 5 } .
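Theorem 2 admits the same compact implementation as Theorem 1, with the comparison reversed: ⊴ counts the points where C_i(x) ⪯ C_j(x). The sketch below (our own code, under that reading) reproduces the IF β-maximal descriptions of Example 4.

```python
# Matrix computation of IF β-maximal descriptions (Theorem 2) for Example 1.
BETA = (0.6, 0.3)
COVER = [  # C1..C5 at x1..x6
    [(0.6, 0.3), (0.5, 0.2), (0.7, 0.3), (0.6, 0.4), (0.7, 0.3), (0.6, 0.2)],
    [(0.7, 0.3), (0.2, 0.6), (0.1, 0.7), (0.6, 0.2), (0.5, 0.2), (0.5, 0.3)],
    [(0.7, 0.2), (0.6, 0.1), (0.8, 0.1), (0.6, 0.3), (0.8, 0.1), (0.6, 0.1)],
    [(0.3, 0.5), (0.5, 0.5), (0.5, 0.3), (0.4, 0.5), (0.6, 0.2), (0.5, 0.3)],
    [(0.5, 0.1), (0.6, 0.4), (0.7, 0.2), (0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
]
n, m = 6, 5

def ge(a, b):
    return a[0] >= b[0] and a[1] <= b[1]

def le(a, b):
    # IF value order ⪯, dual to ⪰
    return a[0] <= b[0] and a[1] >= b[1]

def f_max_description(xk):
    act = [j for j in range(m) if ge(COVER[j][xk], BETA)]
    # M^T(x_k) ⊴ M(x_k): on active pairs, count points where C_i(x) ⪯ C_j(x)
    a = [[0] * m for _ in range(m)]
    for i in act:
        for j in act:
            a[i][j] = sum(1 for x in range(n) if le(COVER[i][x], COVER[j][x]))
    # ⊕ with (n)_{m×m}: y_i = 1 iff a_ij = n exactly when i = j
    return [int(all((a[i][j] == n) == (i == j) for j in range(m)))
            for i in range(m)]

for xk in range(n):
    print("x%d:" % (xk + 1), f_max_description(xk))
```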

4. Matrix Approaches for Reductions in IF β -Covering Approximation Spaces

In this section, we present two kinds of reductions in IF β -covering approximation spaces based on IF β -minimal and β -maximal descriptions, respectively. Moreover, they are calculated by matrices.

4.1. Reductions of IF β -Covering Approximation Spaces via IF β -Minimal Descriptions

The definitions of IF β -minimal reduction and corresponding matrix approaches are mainly presented in this subsection. Firstly, the notion of the subspace of the original IF β -covering approximation space is presented in the following definition.
Definition 11.
Let (U, Ĉ) be an IF β-covering approximation space and D̂ ⊆ Ĉ. We call (U, D̂) an IF sub-β-covering approximation space of (U, Ĉ) if D̂ is also an IF β-covering of U. The family of all IF sub-β-covering approximation spaces of (U, Ĉ) is denoted by S(Ĉ).
By Definition 11, Ĉ ∈ S(Ĉ) for any IF β-covering approximation space (U, Ĉ). We denote Ñ_Ĉ^β(x) = {C ∈ Ĉ : C(x) ⪰ β}. Then, Propositions 5 and 6 show two properties of IF β-minimal descriptions in IF sub-β-covering approximation spaces.
Proposition 5.
Let (U, Ĉ) be an IF β-covering approximation space and D̂ ∈ S(Ĉ). For any x ∈ U, if |Ñ_Ĉ^β(x)| = 1, then Md̃_Ĉ^β(x) = Md̃_D̂^β(x).
Proof. 
If |Ñ_Ĉ^β(x)| = 1, then we suppose Ñ_Ĉ^β(x) = {C}, where C ∈ Ĉ and C(x) ⪰ β. Since D̂ ∈ S(Ĉ), C ∈ D̂. Hence, Ñ_D̂^β(x) = {C}. By Definition 5, Md̃_Ĉ^β(x) = {C ∈ Ĉ : C(x) ⪰ β ∧ (∀D ∈ Ĉ)(D(x) ⪰ β ∧ D ⊆ C ⟹ C = D)} = {C ∈ Ñ_Ĉ^β(x) : ∀D ∈ Ñ_Ĉ^β(x), D ⊆ C ⟹ C = D} = {C} and Md̃_D̂^β(x) = {C ∈ Ñ_D̂^β(x) : ∀D ∈ Ñ_D̂^β(x), D ⊆ C ⟹ C = D} = {C}. Therefore, Md̃_Ĉ^β(x) = Md̃_D̂^β(x). □
Proposition 6.
Let (U, Ĉ) be an IF β-covering approximation space and D̂ ∈ S(Ĉ). For any x ∈ U, if Ñ_Ĉ^β(x) = Ñ_D̂^β(x), then Md̃_Ĉ^β(x) = Md̃_D̂^β(x).
Proof. 
Since Ñ_Ĉ^β(x) = Ñ_D̂^β(x), Md̃_Ĉ^β(x) = {C ∈ Ĉ : C(x) ⪰ β ∧ (∀D ∈ Ĉ)(D(x) ⪰ β ∧ D ⊆ C ⟹ C = D)} = {C ∈ Ñ_Ĉ^β(x) : ∀D ∈ Ñ_Ĉ^β(x), D ⊆ C ⟹ C = D} = {C ∈ Ñ_D̂^β(x) : ∀D ∈ Ñ_D̂^β(x), D ⊆ C ⟹ C = D} = Md̃_D̂^β(x). □
The converse of Proposition 6 is incorrect. That is to say, "for any x ∈ U, if Md̃_Ĉ^β(x) = Md̃_D̂^β(x), then Ñ_Ĉ^β(x) = Ñ_D̂^β(x)" is not true. We use the following example to explain this.
Example 5 (Continued from Example 1).
In Example 1, C ^ = { C 1 , , C 5 } is an IF β-covering of U. In Example 3, we have f ( M d ˜ C ^ β ( x 5 ) ) = ( 1 , 0 , 0 , 1 , 0 ) T , i.e., M d ˜ C ^ β ( x 5 ) = { C 1 , C 4 } . Suppose D ^ = { C 1 , , C 4 } . Then, D ^ S ( C ^ ) .
M_D̂ =
⟨0.6,0.3⟩ ⟨0.7,0.3⟩ ⟨0.7,0.2⟩ ⟨0.3,0.5⟩
⟨0.5,0.2⟩ ⟨0.2,0.6⟩ ⟨0.6,0.1⟩ ⟨0.5,0.5⟩
⟨0.7,0.3⟩ ⟨0.1,0.7⟩ ⟨0.8,0.1⟩ ⟨0.5,0.3⟩
⟨0.6,0.4⟩ ⟨0.6,0.2⟩ ⟨0.6,0.3⟩ ⟨0.4,0.5⟩
⟨0.7,0.3⟩ ⟨0.5,0.2⟩ ⟨0.8,0.1⟩ ⟨0.6,0.2⟩
⟨0.6,0.2⟩ ⟨0.5,0.3⟩ ⟨0.6,0.1⟩ ⟨0.5,0.3⟩.
f(Md̃_D̂^β(x5)) = M_D̂^T(x5) ⊵ M_D̂(x5) ⊕ M̄_D̂ = [6 0 0 5; 0 0 0 0; 6 0 6 6; 0 0 0 6] ⊕ (6)_{4×4} = (1, 0, 0, 1)^T,
i.e., Md̃_D̂^β(x5) = {C1, C4}. Hence, Md̃_Ĉ^β(x5) = Md̃_D̂^β(x5) = {C1, C4}. But Ñ_Ĉ^β(x5) = {C1, C3, C4, C5} and Ñ_D̂^β(x5) = {C1, C3, C4}, i.e., Ñ_Ĉ^β(x5) ≠ Ñ_D̂^β(x5).
Definition 12.
Let ( U , C ^ ) be an IF β-covering approximation space and D ^ S ( C ^ ) . D ^ is called the IF β-minimal reduction of C ^ if D ^ satisfies the following conditions:
(1)
For any x U , M d ˜ C ^ β ( x ) = M d ˜ D ^ β ( x ) ;
(2)
For any Ê ∈ S(D̂) − {D̂}, there exists x ∈ U such that Md̃_Ê^β(x) ≠ Md̃_D̂^β(x).
Let A be a family of subsets of IF(U). We denote Min(A) = {X ∈ A : ∀Y ∈ A, Y ⊆ X ⟹ X = Y}.
Proposition 7.
Let (U, Ĉ) be an IF β-covering approximation space. D̂ is the IF β-minimal reduction of Ĉ if and only if D̂ ∈ Min({Ê ∈ S(Ĉ) : ∀x ∈ U, Md̃_Ĉ^β(x) = Md̃_Ê^β(x)}).
Proof. 
By Definition 12, it is immediate. □
Theorem 3.
Let ( U , C ^ ) be an IF β-covering approximation space. The IF β-minimal reduction of C ^ is unique.
Proof. 
By Definition 12, the existence of an IF β-minimal reduction is clear. We prove uniqueness by contradiction. Suppose D̂1 and D̂2 are two distinct IF β-minimal reductions of Ĉ. Then, Md̃_Ĉ^β(x) = Md̃_D̂1^β(x) = Md̃_D̂2^β(x) for any x ∈ U. Hence, there exists K ∈ D̂2 − D̂1 such that K ∉ Md̃_D̂1^β(x) for any x ∈ U; that is to say, K ∉ Md̃_D̂2^β(x) for any x ∈ U. Moreover, D̂2 − {K} is also an IF β-covering, since for any x ∈ U there exists C ∈ Md̃_D̂2^β(x) such that C(x) ⪰ β. Therefore, Md̃_{D̂2−{K}}^β(x) = Md̃_D̂2^β(x) for any x ∈ U. So D̂2 is not an IF β-minimal reduction of Ĉ, which contradicts the assumption that it is. Thus, the IF β-minimal reduction of Ĉ is unique. □
By Proposition 7 and Theorem 3, the steps for calculating the IF β-minimal reduction in an IF β-covering approximation space (U, Ĉ) are presented as follows:
Step 1: Compute the family of all IF sub- β -covering approximation spaces of ( U , C ^ ) according to Definition 11, i.e., S ( C ^ ) .
Step 2: For any x ∈ U and D̂ ∈ S(Ĉ), compute Md̃_D̂^β(x) according to Theorem 1, i.e., f(Md̃_D̂^β(x)) = M_D̂^T(x) ⊵ M_D̂(x) ⊕ M̄_D̂.
Step 3: Compute F = Min({D̂ ∈ S(Ĉ) : ∀x ∈ U, Md̃_Ĉ^β(x) = Md̃_D̂^β(x)}). The element of F is the IF β-minimal reduction of Ĉ according to Proposition 7 and Theorem 3.
Hence, the IF β-minimal reduction of Ĉ belongs to F.
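Steps 1-3 can be prototyped by brute force over subfamilies, which is feasible only for small m. The sketch below is our own illustration; it uses the set representation of Definition 5 in Step 2 rather than the matrix form, and applies the procedure to the IF β-covering of Example 1.

```python
# Brute-force prototype of Steps 1-3 for the IF β-minimal reduction,
# applied to the IF β-covering of Example 1 (beta = (0.6, 0.3)).
from itertools import combinations

BETA = (0.6, 0.3)
COVER = {
    "C1": [(0.6, 0.3), (0.5, 0.2), (0.7, 0.3), (0.6, 0.4), (0.7, 0.3), (0.6, 0.2)],
    "C2": [(0.7, 0.3), (0.2, 0.6), (0.1, 0.7), (0.6, 0.2), (0.5, 0.2), (0.5, 0.3)],
    "C3": [(0.7, 0.2), (0.6, 0.1), (0.8, 0.1), (0.6, 0.3), (0.8, 0.1), (0.6, 0.1)],
    "C4": [(0.3, 0.5), (0.5, 0.5), (0.5, 0.3), (0.4, 0.5), (0.6, 0.2), (0.5, 0.3)],
    "C5": [(0.5, 0.1), (0.6, 0.4), (0.7, 0.2), (0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
}

def ge(a, b):
    return a[0] >= b[0] and a[1] <= b[1]

def subset(A, B):
    return all(ma <= mb and na >= nb for (ma, na), (mb, nb) in zip(A, B))

def is_beta_covering(fam):
    # Definition 3 restricted to the subfamily fam
    return all(any(ge(COVER[c][x], BETA) for c in fam) for x in range(6))

def md(fam, x):
    # set representation of the IF β-minimal description within fam
    cand = [c for c in fam if ge(COVER[c][x], BETA)]
    return {c for c in cand
            if not any(d != c and subset(COVER[d], COVER[c]) for d in cand)}

full = sorted(COVER)
target = [md(full, x) for x in range(6)]
# Step 1: enumerate sub-β-coverings; Step 2: keep those preserving every
# IF β-minimal description; Step 3: take the inclusion-minimal survivors.
keep = [set(f) for r in range(1, 6) for f in combinations(full, r)
        if is_beta_covering(f) and [md(f, x) for x in range(6)] == target]
F = [f for f in keep if not any(g < f for g in keep)]
print(F)
```

For this particular covering, every proper subfamily changes some IF β-minimal description, so the search returns the whole covering; on other data the unique reduction of Theorem 3 can be a strict subfamily.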

4.2. Reductions of IF β -Covering Approximation Spaces via IF β -Maximal Descriptions

Based on Section 4.1, we present the definition of IF β-maximal reduction and corresponding matrix approaches in this subsection. Firstly, we present two properties of IF β-maximal descriptions in IF sub-β-covering approximation spaces in the following two propositions, respectively.
Proposition 8.
Let ( U , C ^ ) be an IF β-covering approximation space and D ^ S ( C ^ ) . For any x U , if | N ˜ C ^ β ( x ) | = 1 , then M D ˜ C ^ β ( x ) = M D ˜ D ^ β ( x ) .
Proof. 
If |Ñ_Ĉ^β(x)| = 1, then we suppose Ñ_Ĉ^β(x) = {C}, where C ∈ Ĉ and C(x) ⪰ β. Since D̂ ∈ S(Ĉ), C ∈ D̂. Hence, Ñ_D̂^β(x) = {C}. By Definition 9, MD̃_Ĉ^β(x) = {C ∈ Ĉ : C(x) ⪰ β ∧ (∀D ∈ Ĉ)(D(x) ⪰ β ∧ C ⊆ D ⟹ C = D)} = {C ∈ Ñ_Ĉ^β(x) : ∀D ∈ Ñ_Ĉ^β(x), C ⊆ D ⟹ C = D} = {C} and MD̃_D̂^β(x) = {C ∈ Ñ_D̂^β(x) : ∀D ∈ Ñ_D̂^β(x), C ⊆ D ⟹ C = D} = {C}. Therefore, MD̃_Ĉ^β(x) = MD̃_D̂^β(x). □
Proposition 9.
Let ( U , C ^ ) be an IF β-covering approximation space and D ^ S ( C ^ ) . For any x U , if N ˜ C ^ β ( x ) = N ˜ D ^ β ( x ) , then M D ˜ C ^ β ( x ) = M D ˜ D ^ β ( x ) .
Proof. 
Since Ñ_Ĉ^β(x) = Ñ_D̂^β(x), MD̃_Ĉ^β(x) = {C ∈ Ĉ : C(x) ⪰ β ∧ (∀D ∈ Ĉ)(D(x) ⪰ β ∧ C ⊆ D ⟹ C = D)} = {C ∈ Ñ_Ĉ^β(x) : ∀D ∈ Ñ_Ĉ^β(x), C ⊆ D ⟹ C = D} = {C ∈ Ñ_D̂^β(x) : ∀D ∈ Ñ_D̂^β(x), C ⊆ D ⟹ C = D} = MD̃_D̂^β(x). □
The converse of Proposition 9 is incorrect. That is to say, “for any x U , if M D ˜ C ^ β ( x ) = M D ˜ D ^ β ( x ) , then N ˜ C ^ β ( x ) = N ˜ D ^ β ( x ) ” is not true. We use the following example to explain this.
Example 6 (Continued from Example 1).
In Example 1, Ĉ = {C1, …, C5} is an IF β-covering of U. In Example 4, we have f(MD̃_Ĉ^β(x1)) = (0, 1, 1, 0, 0)^T, i.e., MD̃_Ĉ^β(x1) = {C2, C3}. Suppose D̂ = {C1′, C2′, C3′}, where C1′ = C2, C2′ = C3 and C3′ = C5. Then, D̂ ∈ S(Ĉ).
M_D̂ =
⟨0.7,0.3⟩ ⟨0.7,0.2⟩ ⟨0.5,0.1⟩
⟨0.2,0.6⟩ ⟨0.6,0.1⟩ ⟨0.6,0.4⟩
⟨0.1,0.7⟩ ⟨0.8,0.1⟩ ⟨0.7,0.2⟩
⟨0.6,0.2⟩ ⟨0.6,0.3⟩ ⟨0.6,0.3⟩
⟨0.5,0.2⟩ ⟨0.8,0.1⟩ ⟨0.7,0.2⟩
⟨0.5,0.3⟩ ⟨0.6,0.1⟩ ⟨0.6,0.2⟩.
\[
f\big(\widetilde{MD}_{\hat{D}}^{\beta}(x_1)\big) = M_{\hat{D}}^{T}(x_1)\, M_{\hat{D}}(x_1)\, \overline{M_{\hat{D}}} = \begin{pmatrix} 6 & 5 & 0 \\ 1 & 6 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 6 & 6 & 6 \\ 6 & 6 & 6 \\ 6 & 6 & 6 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},
\]
i.e., M D ˜ D ^ β ( x 1 ) = { C 1 ′ , C 2 ′ } = { C 2 , C 3 } . Hence, M D ˜ C ^ β ( x 1 ) = M D ˜ D ^ β ( x 1 ) = { C 2 , C 3 } . But N ˜ C ^ β ( x 1 ) = { C 1 , C 2 , C 3 } and N ˜ D ^ β ( x 1 ) = { C 2 , C 3 } , i.e., N ˜ C ^ β ( x 1 ) ≠ N ˜ D ^ β ( x 1 ) .
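The maximal description used above (the maximal elements, under pointwise IF inclusion, of the family of covering elements whose value at x reaches β) can be mirrored by a small sketch. The encoding and toy data below are my own illustration, not the values of Example 1: an IF set over U is a list of (μ, ν) pairs, and β is one such pair.

```python
# Sketch: IF beta-neighborhood and IF beta-maximal description (toy data).

def ge_beta(pair, beta):
    """C(x) >= beta in the IF order: mu >= mu_beta and nu <= nu_beta."""
    return pair[0] >= beta[0] and pair[1] <= beta[1]

def subset(c, d):
    """C <= D pointwise in the IF order."""
    return all(cm <= dm and cn >= dn for (cm, cn), (dm, dn) in zip(c, d))

def maximal_description(cover, x, beta):
    """Indices of the maximal elements of {C in cover : C(x) >= beta}."""
    nbhd = [i for i, c in enumerate(cover) if ge_beta(c[x], beta)]
    return [i for i in nbhd
            if all(not (subset(cover[i], cover[j]) and cover[i] != cover[j])
                   for j in nbhd)]

C1 = [(0.7, 0.2), (0.5, 0.4)]   # C1 is pointwise below C2 ...
C2 = [(0.8, 0.1), (0.6, 0.3)]   # ... so C1 is never beta-maximal
C3 = [(0.4, 0.5), (0.9, 0.1)]
print(maximal_description([C1, C2, C3], 0, (0.6, 0.3)))  # [1]
```

Under this encoding, C1 is excluded at x0 precisely because a strictly larger covering element (C2) also covers x0, which is the mechanism at work in Example 6.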
Definition 13.
Let ( U , C ^ ) be an IF β-covering approximation space and D ^ ∈ S ( C ^ ) . D ^ is called an IF β-maximal reduction of C ^ if D ^ satisfies the following conditions:
(1)
For any x ∈ U , M D ˜ C ^ β ( x ) = M D ˜ D ^ β ( x ) ;
(2)
For any E ^ ∈ S ( D ^ ) ∖ { D ^ } , there exists x ∈ U such that M D ˜ E ^ β ( x ) ≠ M D ˜ D ^ β ( x ) .
Proposition 10.
Let ( U , C ^ ) be an IF β-covering approximation space. D ^ is an IF β-maximal reduction of C ^ if and only if D ^ ∈ M i n ( { E ^ ∈ S ( C ^ ) : ∀ x ∈ U , M D ˜ C ^ β ( x ) = M D ˜ E ^ β ( x ) } ) .
Proof. 
By Definition 13, it is immediate. □
Theorem 4.
Let ( U , C ^ ) be an IF β-covering approximation space and D ^ ∈ S ( C ^ ) . The IF β-maximal reduction of C ^ is unique.
Proof. 
By Definition 13, the existence of an IF β -maximal reduction is clear, so we prove uniqueness by contradiction. Suppose D ^ 1 and D ^ 2 are two distinct IF β -maximal reductions of C ^ . Then, M D ˜ C ^ β ( x ) = M D ˜ D ^ 1 β ( x ) = M D ˜ D ^ 2 β ( x ) for any x ∈ U . Hence, there exists K ∈ D ^ 2 ∖ D ^ 1 such that K ∉ M D ˜ D ^ 1 β ( x ) for any x ∈ U . That is to say, K ∉ M D ˜ D ^ 2 β ( x ) for any x ∈ U . Moreover, D ^ 2 ∖ { K } is also an IF β -covering, since for any x ∈ U there exists C ∈ M D ˜ D ^ 2 β ( x ) such that C ( x ) ≥ β . Therefore, M D ˜ D ^ 2 ∖ { K } β ( x ) = M D ˜ D ^ 2 β ( x ) for any x ∈ U . Hence, D ^ 2 is not an IF β -maximal reduction of C ^ , which contradicts the assumption that D ^ 2 is one. Thus, the IF β -maximal reduction of C ^ is unique. □
By Proposition 10 and Theorem 4, the steps of calculating the IF β -maximal reduction in the IF β -covering approximation space ( U , C ^ ) are presented as follows:
Step 1: Compute the family of all IF sub- β -covering approximation spaces of ( U , C ^ ) according to Definition 11, i.e., S ( C ^ ) .
Step 2: For any x ∈ U and D ^ ∈ S ( C ^ ) , compute M D ˜ D ^ β ( x ) according to Theorem 2, i.e., f ( M D ˜ D ^ β ( x ) ) = M D ^ T ( x ) M D ^ ( x ) M D ^ ¯ .
Step 3: Compute F = M i n ( { D ^ ∈ S ( C ^ ) : ∀ x ∈ U , M D ˜ C ^ β ( x ) = M D ˜ D ^ β ( x ) } ) . The unique element of F is the IF β -maximal reduction of C ^ according to Proposition 10 and Theorem 4.
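Steps 1–3 amount to a brute-force search over subfamilies. The sketch below is my own illustration under a toy encoding (IF sets as lists of (μ, ν) pairs); it is exponential in the size of the covering and is meant only to make the three steps concrete, not to reproduce the paper's matrix method.

```python
from itertools import combinations

def ge_beta(pair, beta):
    """C(x) >= beta in the IF order."""
    return pair[0] >= beta[0] and pair[1] <= beta[1]

def subset(c, d):
    """C is contained in D pointwise in the IF order."""
    return all(cm <= dm and cn >= dn for (cm, cn), (dm, dn) in zip(c, d))

def max_description(cover, idx, x, beta):
    """Indices of the IF beta-maximal description of x within subfamily idx."""
    nb = [i for i in idx if ge_beta(cover[i][x], beta)]
    return {i for i in nb
            if all(not (subset(cover[i], cover[j]) and cover[i] != cover[j])
                   for j in nb)}

def as_sets(cover, indices):
    """Map index sets back to (hashable) IF sets so descriptions compare correctly."""
    return {tuple(cover[i]) for i in indices}

def maximal_reduction(cover, beta):
    """Inclusion-minimal subfamilies preserving every IF beta-maximal description."""
    n, U = len(cover), range(len(cover[0]))
    target = {x: as_sets(cover, max_description(cover, range(n), x, beta)) for x in U}
    # Step 1: subfamilies that are still IF beta-coverings (elements of S(C)).
    cands = [set(s) for k in range(1, n + 1) for s in combinations(range(n), k)
             if all(any(ge_beta(cover[i][x], beta) for i in s) for x in U)]
    # Step 2: keep those with the same beta-maximal description at every x.
    keep = [s for s in cands
            if all(as_sets(cover, max_description(cover, s, x, beta)) == target[x]
                   for x in U)]
    # Step 3: Min(...) -- the inclusion-minimal survivors.
    return [s for s in keep if not any(t < s for t in keep)]

# Toy data (mine, not Example 7): C1 is pointwise below C2.
C1 = [(0.7, 0.2), (0.5, 0.4)]
C2 = [(0.8, 0.1), (0.6, 0.3)]
C3 = [(0.4, 0.5), (0.9, 0.1)]
print(maximal_reduction([C1, C2, C3], (0.6, 0.3)))  # [{1, 2}]
```

On this toy covering the redundant element C1 is dropped and {C2, C3} survives as the unique reduction, in line with Theorem 4.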
Example 7.
A customer wants to choose suitable attributes to evaluate a house. Let U = { x 1 , x 2 , … , x 6 } be a set of houses and C ^ = { C 1 , C 2 , … , C 5 } be five attributes given by merchants, where C 1 , C 2 , … , C 5 represent expensive, beautiful, large, convenient traffic and green surroundings, respectively. Suppose C j ( x i ) = ⟨ μ C j ( x i ) , ν C j ( x i ) ⟩ ( i = 1 , 2 , … , 6 ; j = 1 , 2 , … , 5 ) , where μ C j ( x i ) and ν C j ( x i ) are the degrees of membership and non-membership of the alternative x i to the attribute C j , respectively. Let β = ⟨ 0.6 , 0.3 ⟩ be the critical value. Suppose that for each alternative x i there exists an attribute C j such that C j ( x i ) ≥ β . Then C ^ is the IF β-covering presented in Example 1.
Step 1: S ( C ^ ) = { { C 1 , C 2 , C 3 , C 4 , C 5 } , { C 1 , C 3 , C 4 , C 5 } , { C 1 , C 2 , C 3 , C 5 } , { C 2 , C 3 , C 4 , C 5 } , { C 1 , C 2 , C 3 , C 4 } , { C 3 , C 4 , C 5 } , { C 1 , C 2 , C 3 } , { C 1 , C 3 , C 4 } , { C 1 , C 3 , C 5 } , { C 2 , C 3 , C 5 } , { C 2 , C 3 , C 4 } , { C 1 , C 3 } , { C 2 , C 3 } , { C 2 , C 5 } , { C 3 , C 4 } , { C 3 , C 5 } , { C 3 } } .
Step 2: For any x ∈ U and D ^ ∈ S ( C ^ ) , we compute M D ˜ D ^ β ( x ) by matrices. All M D ˜ C ^ β ( x ) for x ∈ U were calculated in Example 4. Here, we show only the computation for D ^ = { C 2 , C 3 , C 4 , C 5 } .
Suppose D ^ = { C 1 ′ , C 2 ′ , C 3 ′ , C 4 ′ } , where C 1 ′ = C 2 , C 2 ′ = C 3 , C 3 ′ = C 4 , C 4 ′ = C 5 . Then,
(The IF granular matrix M D ^ of D ^ is displayed as an image in the original article.)
Hence,
\[
f\big(\widetilde{MD}_{\hat{D}}^{\beta}(x_1)\big) = M_{\hat{D}}^{T}(x_1)\, M_{\hat{D}}(x_1)\, \overline{M_{\hat{D}}} = \begin{pmatrix} 6 & 5 & 0 & 0 \\ 1 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix},
\]
i.e., M D ˜ D ^ β ( x 1 ) = { C 1 ′ , C 2 ′ } = { C 2 , C 3 } .
\[
f\big(\widetilde{MD}_{\hat{D}}^{\beta}(x_2)\big) = M_{\hat{D}}^{T}(x_2)\, M_{\hat{D}}(x_2)\, \overline{M_{\hat{D}}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix},
\]
i.e., M D ˜ D ^ β ( x 2 ) = { C 2 ′ } = { C 3 } .
\[
f\big(\widetilde{MD}_{\hat{D}}^{\beta}(x_3)\big) = M_{\hat{D}}^{T}(x_3)\, M_{\hat{D}}(x_3)\, \overline{M_{\hat{D}}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 5 & 0 & 6 \end{pmatrix} \begin{pmatrix} 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix},
\]
i.e., M D ˜ D ^ β ( x 3 ) = { C 2 ′ , C 4 ′ } = { C 3 , C 5 } .
\[
f\big(\widetilde{MD}_{\hat{D}}^{\beta}(x_4)\big) = M_{\hat{D}}^{T}(x_4)\, M_{\hat{D}}(x_4)\, \overline{M_{\hat{D}}} = \begin{pmatrix} 6 & 5 & 0 & 4 \\ 1 & 6 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 5 & 0 & 6 \end{pmatrix} \begin{pmatrix} 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 1 \end{pmatrix},
\]
i.e., M D ˜ D ^ β ( x 4 ) = { C 1 ′ , C 2 ′ , C 4 ′ } = { C 2 , C 3 , C 5 } .
\[
f\big(\widetilde{MD}_{\hat{D}}^{\beta}(x_5)\big) = M_{\hat{D}}^{T}(x_5)\, M_{\hat{D}}(x_5)\, \overline{M_{\hat{D}}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 1 \\ 0 & 6 & 6 & 6 \\ 0 & 5 & 0 & 6 \end{pmatrix} \begin{pmatrix} 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix},
\]
i.e., M D ˜ D ^ β ( x 5 ) = { C 2 ′ , C 4 ′ } = { C 3 , C 5 } .
\[
f\big(\widetilde{MD}_{\hat{D}}^{\beta}(x_6)\big) = M_{\hat{D}}^{T}(x_6)\, M_{\hat{D}}(x_6)\, \overline{M_{\hat{D}}} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 5 & 0 & 6 \end{pmatrix} \begin{pmatrix} 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \\ 6 & 6 & 6 & 6 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 1 \end{pmatrix},
\]
i.e., M D ˜ D ^ β ( x 6 ) = { C 2 ′ , C 4 ′ } = { C 3 , C 5 } .
Step 3: M i n ( { D ^ ∈ S ( C ^ ) : ∀ x ∈ U , M D ˜ C ^ β ( x ) = M D ˜ D ^ β ( x ) } ) = M i n ( { { C 1 , C 2 , C 3 , C 5 } ,   { C 2 , C 3 , C 4 , C 5 } , { C 2 , C 3 , C 5 } } ) = { { C 2 , C 3 , C 5 } } . Therefore, { C 2 , C 3 , C 5 } is the IF β-maximal reduction of C ^ .
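The Min operation used in Step 3 simply keeps the inclusion-minimal members of a family of sets. A one-function sketch, checked against the family obtained in this example:

```python
def min_family(family):
    """Inclusion-minimal members of a family of sets (the Min operation)."""
    # A member survives iff no other member is a proper subset of it.
    return [s for s in family if not any(t < s for t in family)]

fam = [{"C1", "C2", "C3", "C5"}, {"C2", "C3", "C4", "C5"}, {"C2", "C3", "C5"}]
assert min_family(fam) == [{"C2", "C3", "C5"}]
```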

5. Experimental Evaluations

Compared with the set representations, it is necessary to show the advantage of the matrix representations of IF β -minimal and β -maximal descriptions. In this section, we refer to the presented matrix methods for IF β -minimal and β -maximal descriptions as “M-IFMin” and “M-IFMax”, respectively, and compare them with the corresponding set-based algorithms (named “S-IFMin” and “S-IFMax”) through several experiments.

5.1. The Process of Experiments

We construct some IF β -covering approximation spaces and run M-IFMin, M-IFMax, S-IFMin and S-IFMax on them. By Definition 6, any IF β -covering approximation space can be represented as an IF granular matrix. The procedure for constructing an IF β -matrix is as follows: (1) the entries of the matrix are IF numbers whose values are randomly chosen from { 0 , 0.1 , 0.2 , … , 0.9 , 1 } ; (2) for any row of the matrix, if the maximal entry of the row is less than β , we set that entry to β . The resulting matrix is an IF β -matrix, i.e., it represents an IF β -covering approximation space.
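Steps (1)–(2) can be sketched as follows. This is a hedged reading of the construction (my own assumptions: the random values fill both membership and non-membership degrees subject to μ + ν ≤ 1, and a row with no entry reaching β is repaired by overwriting one entry with β):

```python
import random

def random_if_beta_matrix(n_objects, n_covers, beta, seed=0):
    """Random IF matrix in which every row has an entry >= beta, so its
    columns form an IF beta-covering of the universe (step (2) below)."""
    rng = random.Random(seed)
    grid = [v / 10 for v in range(11)]                        # {0, 0.1, ..., 1}
    matrix = []
    for _ in range(n_objects):
        row = []
        for _ in range(n_covers):
            mu = rng.choice(grid)
            nu = rng.choice([v for v in grid if v <= 1 - mu])  # mu + nu <= 1
            row.append((mu, nu))
        # Step (2): if no entry of the row reaches beta, force one to beta.
        if not any(m >= beta[0] and n <= beta[1] for m, n in row):
            row[rng.randrange(n_covers)] = beta
        matrix.append(row)
    return matrix

M = random_if_beta_matrix(6, 5, (0.6, 0.3))
assert all(any(m >= 0.6 and n <= 0.3 for m, n in row) for row in M)
```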
Finally, we compare the computational times of M-IFMin, M-IFMax, S-IFMin and S-IFMax for different values of β , sizes of the universe and sizes of the IF β -covering. All experiments were carried out on a personal computer with 64-bit Windows 10, an Intel(R) Core(TM) i7-8565U CPU @ 1.80 GHz, and 8 GB memory. The programming language was MATLAB R2016a.
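The timing methodology can be sketched with a minimal harness. This is a Python stand-in for the MATLAB scripts; the two timed functions below are generic placeholders operating on plain floats, not the paper's M-IFMin/S-IFMin implementations:

```python
import random
import time

def avg_time(fn, *args, repeats=5):
    """Average wall-clock running time of fn(*args) over several repeats."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        total += time.perf_counter() - start
    return total / repeats

# Illustrative comparison on a random 200 x 200 membership matrix.
rng = random.Random(1)
M = [[rng.random() for _ in range(200)] for _ in range(200)]
beta = 0.6

t_a = avg_time(lambda m: [max(row) >= beta for row in m], M)
t_b = avg_time(lambda m: [any(v >= beta for v in row) for row in m], M)
print(f"variant A: {t_a:.6f}s, variant B: {t_b:.6f}s")
```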

5.2. Results and Analysis

To compare the computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax for different values of β in Figure 1, we set the size of U to 200 and the size of C ^ to 600. In Figure 1a,b, the first component of β ranges from 0.2 to 0.6 in steps of 0.1 while the second component is fixed at 0.3 ; in Figure 1c,d, the first component is fixed at 0.3 while the second ranges from 0.2 to 0.6 in steps of 0.1 . Figure 1a,c show the computational time of M-IFMin and S-IFMin, and Figure 1b,d show that of M-IFMax and S-IFMax. From Figure 1, we can see that the computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax decreases as the value of β increases. M-IFMin (or M-IFMax) is more efficient than S-IFMin (or S-IFMax) for all tested values of β , especially for small values of β .
To compare the computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax for different sizes of U in Figure 2, we set the value of β to ⟨ 0.6 , 0.3 ⟩ and the size of C ^ to 200. The size of U ranges from 200 to 600 in steps of 100. In Figure 2, we can see that the computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax increases as the size of U grows. M-IFMin (or M-IFMax) is more efficient than S-IFMin (or S-IFMax) for all tested sizes of U. Hence, M-IFMin and M-IFMax remain feasible for large universes.
To compare the computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax for different sizes of C ^ in Figure 3, we set the value of β to ⟨ 0.6 , 0.3 ⟩ and the size of U to 200. The size of C ^ ranges from 100 to 500 in steps of 100. In Figure 3, we can see that the computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax increases as the size of C ^ grows. M-IFMin (or M-IFMax) is more efficient than S-IFMin (or S-IFMax) for all tested sizes of C ^ , especially for large C ^ .
By Figure 2 and Figure 3, M-IFMin and M-IFMax are feasible for large U and large C ^ , respectively. That is to say, they scale to large data sets.

6. Conclusions

In this paper, we mainly use matrix approaches to study IF β -covering rough sets by IF β -minimal and β -maximal descriptions. Moreover, the feasibility of the proposed matrix approaches is studied by several experiments. The main conclusions of this paper are as follows:
  • The matrix representations of IF β -minimal and β -maximal descriptions are proposed. Moreover, the comparative analysis illustrates that the proposed calculus based on matrices is feasible for large IF β -coverings as well as big data sets.
  • Two new types of reductions of IF β -covering approximation spaces are proposed via IF β -minimal and β -maximal descriptions, respectively. They are calculated based on the matrix representations of IF β -minimal and β -maximal descriptions. It is a new viewpoint to study IF β -covering rough sets using IF β -minimal and β -maximal descriptions.
Although the matrix methods proposed here are faster than the existing set-based methods in IF set theory, they remain somewhat time-consuming in big-data environments, and even faster calculation methods need to be developed. In the future, the following research topics deserve attention. These matrix approaches can be applied to fuzzy soft covering-based multi-granulation fuzzy rough sets [43]. Moreover, Choquet-like integrals [44,45] were recently combined with fuzzy rough sets, which could be connected with the content of this paper in further research.

Author Contributions

J.W. analyzed the existing work of IF rough sets with matrices, and wrote the paper. X.Z. improved and funded this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 12201373 and 12271319) and the China Postdoctoral Science Foundation (Grant No. 2023T160402).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhu, W.; Wang, F. Reduction and axiomatization of covering generalized rough sets. Inf. Sci. 2003, 152, 217–230. [Google Scholar] [CrossRef]
  2. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  3. Zhao, Z. Reductions of a covering approximation space from algebraic points of view. Int. J. Approx. Reason. 2023, 153, 101–114. [Google Scholar] [CrossRef]
  4. Shakiba, A.; Hooshmandasl, M.R. Data volume reduction in covering approximation spaces with respect to twenty-two types of covering based rough sets. Int. J. Approx. Reason. 2016, 75, 13–38. [Google Scholar] [CrossRef]
  5. Yao, Y.; Yao, B. Covering based rough set approximations. Inf. Sci. 2012, 200, 91–107. [Google Scholar] [CrossRef]
  6. Du, Y.; Hu, Q.; Zhu, P.; Ma, P. Rule learning for classification based on neighborhood covering reduction. Inf. Sci. 2011, 181, 5457–5467. [Google Scholar] [CrossRef]
  7. Mohammed, A.; Shokry, N.; Ashraf, N. Covering soft rough sets and its topological properties with application. Soft Comput. 2023, 27, 4451–4461. [Google Scholar]
  8. Qian, Y.; Liang, J.; Pedrycz, W.; Dang, C. Positive approximation: An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618. [Google Scholar] [CrossRef]
  9. Jing, Y.; Li, T.; Fujita, H.; Yu, Z.; Wang, B. An incremental attribute reduction approach based on knowledge granularity with a multi-granulation view. Inf. Sci. 2017, 411, 23–38. [Google Scholar] [CrossRef]
  10. Wang, C.; Shi, Y.; Fan, X.; Shao, M. Attribute reduction based on κ-nearest neighborhood rough sets. Int. J. Approx. Reason. 2019, 106, 18–31. [Google Scholar] [CrossRef]
  11. Long, Z.; Cai, M.; Li, Q.; Li, Y.; Cai, W. Convex granules and convex covering rough sets. Eng. Appl. Artif. Intell. 2023, 124, 106509. [Google Scholar] [CrossRef]
  12. Dai, J.; Wang, W.; Xu, Q.; Tian, H. Uncertainty measurement for interval-valued decision systems based on extended conditional entropy. Knowl.-Based Syst. 2012, 27, 443–450. [Google Scholar] [CrossRef]
  13. El-Bably, M.K.; Abo-Tabl, E.A. A topological reduction for predicting of a lung cancer disease based on generalized rough sets. J. Intell. Fuzzy Syst. 2021, 41, 3045–3060. [Google Scholar] [CrossRef]
  14. Huang, Z.; Li, J. Covering based multi-granulation rough fuzzy sets with applications to feature selection. Expert Syst. Appl. 2024, 238, 121908. [Google Scholar] [CrossRef]
  15. Wang, S.; Zhu, W.; Zhu, Q.; Min, F. Characteristic matrix of covering and its application to Boolean matrix decomposition. Inf. Sci. 2014, 263, 186–197. [Google Scholar] [CrossRef]
  16. Lang, G.; Li, Q.; Cai, M.; Yang, T. Characteristic matrixes-based knowledge reduction in dynamic covering decision information systems. Knowl.-Based Syst. 2015, 85, 1–26. [Google Scholar] [CrossRef]
  17. Ma, L. The investigation of covering rough sets by Boolean matrices. Int. J. Approx. Reason. 2018, 100, 69–84. [Google Scholar] [CrossRef]
  18. Wang, J.; Zhang, X.; Liu, C. Grained matrix and complementary matrix: Novel methods for computing information descriptions in covering approximation spaces. Inf. Sci. 2022, 591, 68–87. [Google Scholar] [CrossRef]
  19. Zadeh, L.A. Fuzzy sets. Inf. Control. 1965, 8, 338–353. [Google Scholar] [CrossRef]
  20. Yao, Y. A comparative study of fuzzy sets and rough sets. Inf. Sci. 1998, 109, 227–242. [Google Scholar] [CrossRef]
  21. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–208. [Google Scholar] [CrossRef]
  22. Morsi, N.N.; Yakout, M.M. Axiomatics for fuzzy rough sets. Fuzzy Sets Syst. 1998, 100, 327–342. [Google Scholar] [CrossRef]
  23. Radzikowska, A.M.; Kerre, E.E. A comparative study of fuzzy rough sets. Fuzzy Sets Syst. 2002, 126, 137–155. [Google Scholar] [CrossRef]
  24. Greco, S.; Matarazzo, B.; Slowinski, R. Rough set processing of vague information using fuzzy similarity relations. In Finite Versus Infinite; Calude, C.S., Paun, G., Eds.; Springer: London, UK, 2000; pp. 149–173. [Google Scholar]
  25. Wu, W.; Mi, J.; Zhang, W. Generalized fuzzy rough sets. Inf. Sci. 2003, 151, 263–282. [Google Scholar] [CrossRef]
  26. D’eer, L.; Cornelis, C.; Godo, L. Fuzzy neighborhood operators based on fuzzy coverings. Fuzzy Sets Syst. 2017, 312, 17–35. [Google Scholar] [CrossRef]
  27. Feng, T.; Zhang, S.; Mi, J. The reduction and fusion of fuzzy covering systems based on the evidence theory. Int. J. Approx. Reason. 2012, 53, 87–103. [Google Scholar] [CrossRef]
  28. Ma, L. Two fuzzy covering rough set models and their generalizations over fuzzy lattices. Fuzzy Sets Syst. 2016, 294, 1–17. [Google Scholar] [CrossRef]
  29. Yang, B.; Hu, B. Fuzzy neighborhood operators and derived fuzzy coverings. Fuzzy Sets Syst. 2019, 370, 1–33. [Google Scholar] [CrossRef]
  30. Zhan, J.; Zhang, X.; Yao, Y. Covering based multigranulation fuzzy rough sets and corresponding applications. Artif. Intell. Rev. 2020, 53, 1093–1126. [Google Scholar] [CrossRef]
  31. Yang, B.; Hu, B. Matrix representations and interdependency on L-fuzzy covering-based approximation operators. Int. J. Approx. Reason. 2018, 96, 57–77. [Google Scholar] [CrossRef]
  32. Yang, B.; Hu, B. A fuzzy covering-based rough set model and its generalization over fuzzy lattice. Inf. Sci. 2016, 367–368, 463–486. [Google Scholar] [CrossRef]
  33. Wang, J.; Zhang, X.; Yao, Y. Matrix approach for fuzzy description reduction and group decision-making with fuzzy β-covering. Inf. Sci. 2022, 597, 53–85. [Google Scholar] [CrossRef]
  34. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  35. Huang, B.; Guo, C.; Li, H.; Feng, G.; Zhou, X. An intuitionistic fuzzy graded covering rough set. Knowl.-Based Syst. 2016, 107, 155–178. [Google Scholar] [CrossRef]
  36. Ye, J.; Zhan, J.; Ding, W.; Fujita, H. A novel fuzzy rough set model with fuzzy neighborhood operators. Inf. Sci. 2021, 544, 266–297. [Google Scholar] [CrossRef]
  37. Jain, P.; Som, T. Multigranular rough set model based on robust intuitionistic fuzzy covering with application to feature selection. Int. J. Approx. Reason. 2023, 156, 16–37. [Google Scholar] [CrossRef]
  38. Liu, C.; Cai, K.; Miao, D.; Qian, J. Novel matrix-based approaches to computing MinD and MaxD in covering-based rough sets. Inf. Sci. 2020, 539, 312–326. [Google Scholar] [CrossRef]
  39. Bonikowski, Z.; Bryniarski, E.; Wybraniec-Skardowska, U. Extensions and intentions in the rough set theory. Inf. Sci. 1998, 107, 149–167. [Google Scholar] [CrossRef]
  40. Pomykala, J.A. Approximation operations in approximation space. Bull. Pol. Acad. Sci. 1987, 35, 653–662. [Google Scholar]
  41. Zhu, W. Relationship among basic concepts in covering-based rough sets. Inf. Sci. 2009, 179, 2478–2486. [Google Scholar] [CrossRef]
  42. Wang, Z.; Shu, L.; Ding, X. Minimal description and maximal description in covering-based rough sets. Fundam. Inform. 2013, 128, 503–526. [Google Scholar] [CrossRef]
  43. Atef, M.; Ali, M.I.; Al-Shami, T.M. Fuzzy soft covering-based multi-granulation fuzzy rough sets and their applications. Comput. Appl. Math. 2021, 40, 115. [Google Scholar] [CrossRef]
  44. Wang, J.; Shao, S.; Zhang, X. Choquet-like integrals with multi-neighborhood approximation numbers for novel covering granular reduction methods. Mathematics 2023, 11, 4650. [Google Scholar] [CrossRef]
  45. Wang, J.; Zhang, X.; Shen, Q. Choquet-like integrals with rough attribute fuzzy measures for data-driven decision-making. IEEE Trans. Fuzzy Syst. 2024, 32, 2825–2836. [Google Scholar] [CrossRef]
Figure 1. Computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax with different β ( | U | = 200 , | C ^ | = 600 ).
Figure 2. Computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax with different sizes of U ( β = 0.6 , 0.3 , | C ^ | = 200 ).
Figure 3. Computational time of M-IFMin, M-IFMax, S-IFMin and S-IFMax with different sizes of C ^ ( β = 0.6 , 0.3 , | U | = 200 ).
Table 1. Relevant symbols in this paper.

Original symbols:
  Covering approximation space: ( U , C )
  Minimal description of x: M d C ( x )
  Maximal description of x: M D C ( x )
  IF β -covering approximation space: ( U , C ^ )
  IF β -neighborhood: N ˜ C ^ β ( x )
  IF β -minimal description of x: M d ˜ β ( C ^ , x )

New symbols:
  IF β -maximal description of x: M D ˜ β ( C ^ , x )
  IF granular matrix representation of C ^ : M C ^
  IF eigenmatrix of x: M C ^ ( x )
  IF β -covering number matrix of C ^ : M C ^ ¯
