Article

Dominance-Based Rough Set Model in Hesitant Fuzzy Information Systems

College of Mathematics and System Science, Xinjiang University, Urumqi 830046, China
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(9), 1190; https://doi.org/10.3390/sym16091190
Submission received: 4 August 2024 / Revised: 7 September 2024 / Accepted: 8 September 2024 / Published: 10 September 2024
(This article belongs to the Section Mathematics)

Abstract: Hesitant fuzzy information systems have been widely applied in decision-making due to their ability to handle uncertain information. In addition, dominance relations are taken into account in many practical decision-making problems. Therefore, it is of great significance to study hesitant fuzzy information systems involving dominance relations. In this study, we introduce dominance relations into a hesitant fuzzy information system to obtain a dominance-based hesitant fuzzy information system, which provides a new basis for comparing hesitant fuzzy elements. Furthermore, a hesitant fuzzy dominance-based rough set model is constructed, and an attribute reduction method is designed to simplify the dominance-based hesitant fuzzy information system. Further, we propose lower and upper approximation discernibility matrices in the dominance-based hesitant fuzzy decision information system to extract decision rules. In addition, two numerical examples are given to demonstrate the effectiveness of the proposed attribute reduction methods.

1. Introduction

With the development of science and technology, the volume of data is increasing rapidly. It is worth noting that this vast amount of data contains numerous uncertain phenomena, which are of great significance for the investigation of uncertainty theory. As a valuable mathematical tool for handling uncertainty in data analysis and decision-making, rough set theory (RST), initially proposed by Pawlak in 1982 [1], uses upper and lower approximations to characterize uncertain information based on equivalence relations. In order to handle different kinds of data in information systems, scholars have proposed several extended rough set models, including the neighborhood rough set model [2,3], the fuzzy rough set model [4], and the intuitionistic fuzzy rough set model [5]. Meanwhile, RST has been widely applied in various fields such as deep learning [6], machine learning [7], decision support [8], pattern recognition [9], and so on. Recently, Guo et al. [10] proposed a three-way decision evaluation model that focuses on change and offers an effective tool in the field of three-way decisions. Xu et al. [11] improved and applied a rough set model of three-way decision-making for decision analysis. The original RST does not take attributes with preference-ordered domains into account. However, in many real situations, the significance of attribute ordering is evident, as it can directly impact the outcome of practical problems. Consequently, numerous researchers have investigated rough set models based on dominance relations. For instance, Zhang et al. [12] developed a comprehensive multigranulation neighborhood dominance rough set model based on an intuitionistic fuzzy neighborhood dominance relation. In addition, Zhang et al. [13] also put forward adjustable-perspective dominance relations by combining three distinct dominance relations with an intuitionistic fuzzy-ordered information system. Under intuitionistic fuzzy environments, Huang et al. [14] established a dominance-based rough set model in intuitionistic fuzzy information systems. In recent years, the dominance-based rough set model has remained a significant research topic.
In 2010, Torra [15] introduced the hesitant fuzzy set (HFS) to effectively capture the uncertainty and ambiguity in complex problems. Since then, HFS has rapidly developed in both theory and application [16,17,18,19,20,21,22]. Meanwhile, the combination of HFS with RST in hybrid models is a noteworthy research field, and many scholars have achieved outstanding results in this area. For example, Yang et al. [23] proposed a hesitant fuzzy rough set model and its axiomatic characterization. In 2017, Zhang et al. [24] designed a decision-making framework using a hesitant fuzzy rough set model over two universes and studied its applications. Another notable contribution of Zhang et al. [25] is the establishment of a general framework for studying dual hesitant fuzzy rough sets, which integrates rough set theory with dual hesitant fuzzy set theory. Additionally, Liang and Liu [26] presented a hesitant fuzzy decision-theoretic rough set (DTRS) model by expressing the loss function of the DTRS model with hesitant fuzzy values and explored a risk decision-making method. Analogously, Li and Huang [27] utilized hesitant fuzzy values to characterize cost and revenue functions in investment decision problems and further constructed a hesitant fuzzy three-way investment decision model. Additionally, Liang et al. [28] fused DTRS with hesitant fuzzy information systems (HFISs) and further designed a decision-making procedure in HFISs.
A significant application of RST is attribute reduction; the essence of reduction is to eliminate redundant attributes and simplify datasets. Various attribute reduction methods have been proposed by numerous scholars. For instance, Mi et al. [29] developed an attribute reduction method based on classification closeness degree. In addition, Zhang et al. [30] advanced some methods to extract effective decision rules from information systems. Recently, Zhong et al. [31] put forward a generalized unsupervised mixed-attribute reduction model based on fuzzy rough sets. Chen and Zhu [32] proposed attribute reduction methods in light of a variable precision multigranulation rough set model. However, most of the current research focuses on traditional fuzzy logic systems and rough set theory, while research on attribute reduction methods in HFISs is extremely limited. As a result, existing methods cannot efficiently perform attribute reduction when facing large amounts of hesitant fuzzy information. Given this situation, this study aims to construct a dominance-based hesitant fuzzy rough set model and design an attribute reduction approach in HFISs.
The remainder of the paper is organized as follows. Section 2 briefly reviews some basic concepts related to RST and HFIS. In Section 3, a hesitant fuzzy dominance-based rough set model is established, and an attribute reduction method is designed in a dominance-based hesitant fuzzy information system (DHFIS). Section 4 constructs an RST model and develops an attribute reduction method in dominance-based hesitant fuzzy decision information systems (DHFDISs). Section 5 summarizes the contributions and indicates future research direction.

2. Fundamental Concepts

In this section, we review some fundamental properties about HFSs, RST, and attribute reduction in information systems.
First of all, we introduce the definition of information systems.
Definition 1
([18]). A quadruple S = ( U , C , V , f ) is called an information system, in which U represents the universe of discourse, C stands for the attribute set, V c is the set of evaluation values of all objects under attribute c, called the domain of attribute c, and V = { V c : c ∈ C } is the collection of all attribute domains. The information function f : U × C → V maps each object–attribute pair ( x , c ) to a value f ( x , c ) ∈ V c . Further, f ( x , c ) (also denoted f c ( x ) ) represents the evaluation value of x under attribute c.
Classic rough set theory is based on equivalence relations and describes the roughness of a set by constructing upper and lower approximation sets.
Definition 2
([1]). Let S = ( U , C , V , f ) be an information system; here, R C = { ( x , y ) ∈ U × U : f ( x , c ) = f ( y , c ) , ∀ c ∈ C } is called an indiscernibility relation under C on U.
Obviously, R C satisfies reflexivity, symmetry, and transitivity, so it is an equivalence relation. The equivalence class of x ( x ∈ U ) is defined as [ x ] C = { y ∈ U : ( x , y ) ∈ R C } . Moreover, if B ⊆ C , then R C ⊆ R B , R C = ∩ c ∈ C R { c } , and [ x ] C ⊆ [ x ] B .
Definition 3
([1]). Let S = ( U , C , V , f ) be an information system. For any B ⊆ C and X ⊆ U , the R B -lower approximation and R B -upper approximation of X are defined as:
R B ̲ ( X ) = { x ∈ U : [ x ] B ⊆ X } , R B ¯ ( X ) = { x ∈ U : [ x ] B ∩ X ≠ Ø } .
The set B N D R B ( X ) = R B ¯ ( X ) − R B ̲ ( X ) is the R B -boundary domain of X, P O S R B ( X ) = R B ̲ ( X ) is called the R B -positive domain of X, and N E G R B ( X ) = U − R B ¯ ( X ) is termed the R B -negative domain of X. Moreover, the set X is a rough set if and only if B N D R B ( X ) ≠ Ø ; otherwise, X is a definable set.
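To make Definition 3 concrete, the lower and upper approximations can be sketched in a few lines of Python; the toy universe and attribute values below are invented for illustration and are not taken from the paper.

```python
# Pawlak rough set approximations over a toy information system.
# The universe, attributes, and values here are hypothetical.

def equivalence_class(U, f, B, x):
    """[x]_B: objects agreeing with x on every attribute in B."""
    return {y for y in U if all(f[y][b] == f[x][b] for b in B)}

def approximations(U, f, B, X):
    """R_B-lower and R_B-upper approximations of X (Definition 3)."""
    lower = {x for x in U if equivalence_class(U, f, B, x) <= X}
    upper = {x for x in U if equivalence_class(U, f, B, x) & X}
    return lower, upper

U = {1, 2, 3, 4}
f = {1: {'a': 0, 'b': 1}, 2: {'a': 0, 'b': 1},
    3: {'a': 1, 'b': 0}, 4: {'a': 1, 'b': 1}}
X = {1, 3}
lower, upper = approximations(U, f, {'a', 'b'}, X)
print(lower, upper)  # boundary domain = upper - lower
```

Here objects 1 and 2 are indiscernible, so X = {1, 3} is rough: its boundary domain {1, 2} is non-empty.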
As a main branch of rough set theory, attribute reduction aims to eliminate redundant attributes from an information system without affecting its classification ability. In the following, we introduce the attribute reduction of an information system.
Definition 4
([1]). Let S = ( U , C , V , f ) be an information system and B ⊆ C . If R B = R C , then B is called a consistent set. Furthermore, if B is a consistent set and R A ≠ R C for any A ⊊ B , then B is referred to as an attribute reduction of S.
To further explore attribute reduction in uncertain environments, we introduce hesitant fuzzy sets as follows.
Definition 5
([15]). Let U be a universe of discourse. A hesitant fuzzy set (HFS) A on U is expressed as A = { < x , h A ( x ) > : x ∈ U } , where h A ( x ) is a set of values in [0,1], representing the possible membership degrees of x ∈ U to A, and h A ( x ) is called a hesitant fuzzy element (HFE).
Because an HFS can intuitively capture the ambiguity and uncertainty of information, researchers have introduced its idea into a classical information system to establish a hesitant fuzzy information system (HFIS).
Definition 6
([18]). For an information system S = ( U , C , V , f ) , if f ( x , c ) V c is an HFE, then S is called an HFIS.
If an HFIS S = ( U , C ∪ { d } , V , f ) satisfies C ∩ { d } = Ø , then it is called a hesitant fuzzy decision information system (HFDIS), where C is the conditional attribute set and d is a decision attribute [18].
To objectively compare two HFEs, Xia and Xu designed the following comparison rules.
Definition 7
([18]). Given an HFE h = { γ 1 , γ 2 , … , γ n } , the score of h is defined as s ( h ) = ( ∑ i = 1 n γ i ) / n . For two HFEs h 1 and h 2 , if s ( h 1 ) < s ( h 2 ) , then h 1 < h 2 , and if s ( h 1 ) = s ( h 2 ) , then h 1 = h 2 .
For example, suppose we have two HFEs h 1 = { 0.7 , 0.3 , 0.2 } and h 2 = { 0.3 , 0.5 } ; then, s ( h 1 ) = s ( h 2 ) = 0.4 , so h 1 = h 2 by Definition 7. However, h 2 is clearly more stable than h 1 , because its values are more concentrated around the score; therefore, it is insufficient to compare HFEs by their scores alone. To address this limitation, we present a novel method for comparing HFEs in the following.
Definition 8.
For an HFE h = { γ 1 , γ 2 , … , γ n } , the deviation of h is defined as v ( h ) = ( ∑ i = 1 n [ γ i − s ( h ) ]² ) / n ; then, for two HFEs h 1 and h 2 :
(1) If s ( h 1 ) < s ( h 2 ) , then h 2 is superior to h 1 , denoted by h 1 ≺ h 2 ;
(2) If s ( h 1 ) = s ( h 2 ) , then:
      h 1 is superior to h 2 if v ( h 1 ) < v ( h 2 ) , denoted by h 2 ≺ h 1 ;
      h 2 is equivalent to h 1 if v ( h 1 ) = v ( h 2 ) , denoted by h 1 ∼ h 2 .
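As a quick sanity check of Definitions 7 and 8, the score, the deviation, and the two-level comparison can be sketched in Python; the rounding guard and the function names are our own, and the values are the h 1 and h 2 of the example above.

```python
# Score and deviation of an HFE (Definitions 7 and 8); rounding
# guards against floating-point noise when comparing scores.

def score(h):
    return round(sum(h) / len(h), 12)

def deviation(h):
    s = score(h)
    return round(sum((g - s) ** 2 for g in h) / len(h), 12)

def hfe_leq(h1, h2):
    """h1 ⪯ h2: smaller score, or equal score and deviation no smaller."""
    if score(h1) != score(h2):
        return score(h1) < score(h2)
    return deviation(h1) >= deviation(h2)

h1, h2 = [0.7, 0.3, 0.2], [0.3, 0.5]
print(score(h1), score(h2))          # equal scores: 0.4 and 0.4
print(deviation(h1), deviation(h2))  # h2 deviates less, so h1 ⪯ h2
print(hfe_leq(h1, h2))               # True
```

The deviation breaks the tie that the score alone cannot: h 2 has the smaller deviation, so it is superior.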
Given an HFIS S = ( U , C , V , f ) , we denote f c ( x ) ⪯ f c ( y ) if f c ( x ) ≺ f c ( y ) or f c ( x ) ∼ f c ( y ) , which means that y dominates x with respect to c.
Furthermore, we define a dominance relation on U with the help of comparison method on HFEs.
Definition 9.
Suppose S = ( U , C , V , f ) is an HFIS; the dominance relation R C in S is defined as:
R C = { ( x , y ) ∈ U × U : f c ( x ) ⪯ f c ( y ) , ∀ c ∈ C } .
According to the definition above, x R C y means that y dominates x with respect to all attributes in C. Additionally, the dominating class of x induced by R C is defined as [ x ] C = { y ∈ U : ( x , y ) ∈ R C } , which comprises the objects dominating x under C. Similarly, the dominated class of x is defined as [ x ] C = { y ∈ U : ( y , x ) ∈ R C } .
As noted in reference [14], a criterion is an attribute that can exhibit a total order; i.e., the evaluation values of all objects under the attribute can be compared. Moreover, if all attributes in an information system are criteria, then the information system is a dominance information system. Consequently, a hesitant fuzzy information system can be regarded as a dominance-based hesitant fuzzy information system (DHFIS) after introducing R C , since any two HFEs can be compared by Definition 7. In the following, we conduct all our studies on DHFISs.
Proposition 1.
For a DHFIS S = ( U , C , V , f ) and B ⊆ C , the following properties hold:
(1) R C is reflexive and transitive;
(2) R C = ∩ c ∈ C R { c } ;
(3) R C ⊆ R B , [ x ] C ⊆ [ x ] B ;
(4) If y ∈ [ x ] B , then [ y ] B ⊆ [ x ] B , and [ x ] B = ∪ { [ y ] B : y ∈ [ x ] B } ;
(5) [ x ] B = [ y ] B if and only if f b ( x ) = f b ( y ) for all b ∈ B .
Proof. 
(1) For any x ∈ U , it is obvious that f c ( x ) = f c ( x ) for any c ∈ C , which implies ( x , x ) ∈ R C . Thus, R C is reflexive. In addition, suppose there exist x , y , z ∈ U such that x R C y and y R C z , which means that f c ( x ) ⪯ f c ( y ) and f c ( y ) ⪯ f c ( z ) for any c ∈ C ; thus, we obtain f c ( x ) ⪯ f c ( z ) , i.e., x R C z . So, R C is transitive.
(2) For any ( x , y ) ∈ ∩ c ∈ C R { c } , it is easy to see that ( x , y ) ∈ R { c } for any c ∈ C . This implies ( x , y ) ∈ R C ; thus, ∩ c ∈ C R { c } ⊆ R C . Conversely, for any ( x , y ) ∈ R C , we have ( x , y ) ∈ R { c } for any c ∈ C ; that is, ( x , y ) ∈ ∩ c ∈ C R { c } , and therefore R C ⊆ ∩ c ∈ C R { c } .
(3) For any ( x , y ) ∈ R C and B ⊆ C , we can obtain f b ( x ) ⪯ f b ( y ) for any b ∈ B , which implies that ( x , y ) ∈ R B . Therefore, R C ⊆ R B .
If y ∈ [ x ] C , then f c ( x ) ⪯ f c ( y ) for any c ∈ C . Since B ⊆ C , it follows that y ∈ [ x ] B . Therefore, [ x ] C ⊆ [ x ] B .
(4) If y ∈ [ x ] B , then f b ( x ) ⪯ f b ( y ) for any b ∈ B . Likewise, for any e ∈ [ y ] B , f b ( y ) ⪯ f b ( e ) holds for any b ∈ B . Then, f b ( x ) ⪯ f b ( e ) , which means e ∈ [ x ] B , so [ y ] B ⊆ [ x ] B is obtained. Additionally, ∪ { [ y ] B : y ∈ [ x ] B } ⊆ [ x ] B is obvious. For any z ∈ [ x ] B , let y = z ; then, y ∈ [ x ] B and z ∈ [ y ] B , so [ x ] B ⊆ ∪ { [ y ] B : y ∈ [ x ] B } holds. Therefore, [ x ] B = ∪ { [ y ] B : y ∈ [ x ] B } is obtained.
(5) We first prove that [ x ] B = [ y ] B implies f b ( x ) = f b ( y ) for all b ∈ B . If [ x ] B = [ y ] B , then [ x ] B ⊆ [ y ] B , so x ∈ [ y ] B ; that is, f b ( y ) ⪯ f b ( x ) for all b ∈ B . Similarly, it follows that f b ( x ) ⪯ f b ( y ) . Hence, f b ( x ) = f b ( y ) holds for all b ∈ B .
Next, we prove that f b ( x ) = f b ( y ) for all b ∈ B implies [ x ] B = [ y ] B . Since f b ( x ) = f b ( y ) for any b ∈ B , it is easy to find that [ x ] B = [ y ] B . □
For clarity, an example is given to illustrate how to identify a DHFIS and calculate the dominance classes of objects.
Example 1.
In a class, there are eight students { x 1 , x 2 , , x 8 } running for the position of class leader. To ensure a fair evaluation of the best candidate, three experts are invited to assess these students. Each expert evaluates the students on a scale of 0 to 1 based on five attributes: academic performance ( c 1 ) , leadership ability ( c 2 ) , organizational skills ( c 3 ) , communication skills ( c 4 ) , and innovative ability ( c 5 ) . The experts evaluate student x 1 under c 1 as 0.5, 0.4, and 0.3, and the evaluation values are denoted by an HFE { 0.5 , 0.4 , 0.3 } . Therefore, we can acquire an HFIS as shown in Table 1. For instance, based on these scores, it can be obtained that x 6 R { c 1 } x 2 R { c 1 } x 7 R { c 1 } x 1 R { c 1 } x 3 R { c 1 } x 8 R { c 1 } x 5 R { c 1 } x 4 ; thus, c 1 is a criterion. Analogously, all attributes can be proven to be criteria, so the HFIS is a DHFIS. Furthermore, the dominated and dominating classes of x U can be calculated, and the results are illustrated in Table 2.
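A small Python sketch of Definition 9 may help: using the score–deviation comparison of Definition 8, it computes dominating classes over a hypothetical three-object, two-attribute HFIS (the values below are invented for illustration, not those of Table 1).

```python
# Dominating classes [x]_C in a toy DHFIS. Attribute values are
# hypothetical HFEs; hfe_leq implements the Definition 8 comparison.

def score(h):
    return round(sum(h) / len(h), 12)

def deviation(h):
    s = score(h)
    return round(sum((g - s) ** 2 for g in h) / len(h), 12)

def hfe_leq(h1, h2):
    """f_c(x) ⪯ f_c(y): score-then-deviation comparison."""
    if score(h1) != score(h2):
        return score(h1) < score(h2)
    return deviation(h1) >= deviation(h2)

def dominating_class(U, f, C, x):
    """Objects y with f_c(x) ⪯ f_c(y) for every attribute c in C."""
    return {y for y in U if all(hfe_leq(f[x][c], f[y][c]) for c in C)}

U = {'x1', 'x2', 'x3'}
f = {'x1': {'c1': [0.5, 0.4, 0.3], 'c2': [0.4, 0.5, 0.6]},
    'x2': {'c1': [0.6, 0.5],      'c2': [0.6, 0.7]},
    'x3': {'c1': [0.3, 0.2],      'c2': [0.4, 0.3]}}
C = {'c1', 'c2'}
for x in sorted(U):
    print(x, sorted(dominating_class(U, f, C, x)))
```

Reflexivity is visible in the output (each object belongs to its own dominating class), as Proposition 1 requires.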

3. Rough Set Model and Attribute Reduction of DHFIS

In this part, we construct a rough set model in a DHFIS based on the dominance relation and design an attribute reduction method.
Definition 10.
For a DHFIS S = ( U , C , V , f ) , given X ⊆ U and B ⊆ C , the lower approximation and upper approximation of X with respect to R B are defined as follows:
R B ̲ ( X ) = { x ∈ U : [ x ] B ⊆ X } ,
R B ¯ ( X ) = { x ∈ U : [ x ] B ∩ X ≠ Ø } .
This rough set model is based on the dominance relation and is constructed by considering the inclusion relationships between the object set and the dominating classes of the objects.
Similar to the calculation of classical rough sets, the following properties can be proved.
Proposition 2.
For a DHFIS S = ( U , C , V , f ) , A ⊆ C , and X , Y ⊆ U , the following hold:
(1) R A ̲ ( X ) ⊆ X ⊆ R A ¯ ( X ) ;
(2) R A ̲ ( Ø ) = R A ¯ ( Ø ) = Ø , R A ̲ ( U ) = R A ¯ ( U ) = U ;
(3) If A ⊆ C , then R A ̲ ( X ) ⊆ R C ̲ ( X ) and R C ¯ ( X ) ⊆ R A ¯ ( X ) ;
(4) If X ⊆ Y , then R A ̲ ( X ) ⊆ R A ̲ ( Y ) and R A ¯ ( X ) ⊆ R A ¯ ( Y ) ;
(5) R A ̲ ( ∼ X ) = ∼ R A ¯ ( X ) , R A ¯ ( ∼ X ) = ∼ R A ̲ ( X ) ( ∼ X stands for the complement of X );
(6) R A ̲ ( X ) ∪ R A ̲ ( Y ) ⊆ R A ̲ ( X ∪ Y ) , R A ¯ ( X ∩ Y ) ⊆ R A ¯ ( X ) ∩ R A ¯ ( Y ) ;
(7) R A ̲ ( X ∩ Y ) = R A ̲ ( X ) ∩ R A ̲ ( Y ) , R A ¯ ( X ∪ Y ) = R A ¯ ( X ) ∪ R A ¯ ( Y ) ;
(8) R A ̲ ( R A ̲ ( X ) ) = R A ̲ ( X ) , R A ¯ ( R A ¯ ( X ) ) = R A ¯ ( X ) .
Proof. 
(1) For any x ∈ R A ̲ ( X ) , it is clear that [ x ] A ⊆ X . Since x ∈ [ x ] A , we obtain x ∈ X ; therefore, R A ̲ ( X ) ⊆ X . Moreover, suppose x ∈ X ; then, [ x ] A ∩ X ≠ Ø , and we further have x ∈ R A ¯ ( X ) . Thus, X ⊆ R A ¯ ( X ) .
(2) According to (1), we have R A ̲ ( Ø ) ⊆ Ø . Since Ø ⊆ R A ̲ ( Ø ) is obvious, R A ̲ ( Ø ) = Ø is obtained. In addition, suppose R A ¯ ( Ø ) ≠ Ø ; then, there exists x ∈ R A ¯ ( Ø ) , that is, [ x ] A ∩ Ø ≠ Ø , which contradicts [ x ] A ∩ Ø = Ø . Therefore, R A ¯ ( Ø ) = Ø holds.
Similarly, R A ̲ ( U ) = R A ¯ ( U ) = U can be proved.
(3) If x ∈ R A ̲ ( X ) , then [ x ] A ⊆ X . Moreover, for A ⊆ C , we have [ x ] C ⊆ [ x ] A , and then x ∈ R C ̲ ( X ) ; thus, R A ̲ ( X ) ⊆ R C ̲ ( X ) . On the other hand, for any x ∈ R C ¯ ( X ) , we have [ x ] C ∩ X ≠ Ø , and [ x ] C ⊆ [ x ] A is obvious. In this case, x ∈ R A ¯ ( X ) holds. Therefore, R C ¯ ( X ) ⊆ R A ¯ ( X ) .
(4) If x ∈ R A ̲ ( X ) , then [ x ] A ⊆ X . Furthermore, since X ⊆ Y , we obtain [ x ] A ⊆ Y . Therefore, x ∈ R A ̲ ( Y ) , leading to the conclusion that R A ̲ ( X ) ⊆ R A ̲ ( Y ) .
Similarly, for any x ∈ R A ¯ ( X ) , we have [ x ] A ∩ X ≠ Ø . Moreover, because X ⊆ Y , we have [ x ] A ∩ Y ≠ Ø . Hence, x ∈ R A ¯ ( Y ) , which implies that R A ¯ ( X ) ⊆ R A ¯ ( Y ) .
(5) If x ∈ R A ̲ ( ∼ X ) , then [ x ] A ⊆ ∼ X . Therefore, [ x ] A ∩ X = Ø , which means that x ∉ R A ¯ ( X ) , i.e., x ∈ ∼ R A ¯ ( X ) , so R A ̲ ( ∼ X ) ⊆ ∼ R A ¯ ( X ) . Analogously, for any x ∈ ∼ R A ¯ ( X ) , we have [ x ] A ∩ X = Ø , which implies that [ x ] A ⊆ ∼ X . Consequently, x ∈ R A ̲ ( ∼ X ) ; that is, ∼ R A ¯ ( X ) ⊆ R A ̲ ( ∼ X ) . Based on the above analysis, R A ̲ ( ∼ X ) = ∼ R A ¯ ( X ) holds.
Analogously, R A ¯ ( ∼ X ) = ∼ R A ̲ ( X ) can be proved.
(6) If x ∈ R A ̲ ( X ) ∪ R A ̲ ( Y ) , then [ x ] A ⊆ X or [ x ] A ⊆ Y , which means that [ x ] A ⊆ X ∪ Y . Then, x ∈ R A ̲ ( X ∪ Y ) holds; consequently, R A ̲ ( X ) ∪ R A ̲ ( Y ) ⊆ R A ̲ ( X ∪ Y ) .
Furthermore, if x ∈ R A ¯ ( X ∩ Y ) , it is evident that [ x ] A ∩ ( X ∩ Y ) ≠ Ø . That is, [ x ] A ∩ X ≠ Ø and [ x ] A ∩ Y ≠ Ø . Therefore, x ∈ R A ¯ ( X ) ∩ R A ¯ ( Y ) , and then we have R A ¯ ( X ∩ Y ) ⊆ R A ¯ ( X ) ∩ R A ¯ ( Y ) .
(7) Assume x ∈ R A ̲ ( X ∩ Y ) ; then, [ x ] A ⊆ X ∩ Y , which means that [ x ] A ⊆ X and [ x ] A ⊆ Y . So, x ∈ R A ̲ ( X ) ∩ R A ̲ ( Y ) , and therefore R A ̲ ( X ∩ Y ) ⊆ R A ̲ ( X ) ∩ R A ̲ ( Y ) holds. Conversely, if x ∈ R A ̲ ( X ) ∩ R A ̲ ( Y ) , then x ∈ R A ̲ ( X ) and x ∈ R A ̲ ( Y ) , and thus x ∈ R A ̲ ( X ∩ Y ) . In view of this, R A ̲ ( X ) ∩ R A ̲ ( Y ) ⊆ R A ̲ ( X ∩ Y ) .
In the same manner, it is easy to prove R A ¯ ( X Y ) = R A ¯ ( X ) R A ¯ ( Y ) .
(8) According to (1), we can deduce that R A ̲ ( R A ̲ ( X ) ) ⊆ R A ̲ ( X ) . Furthermore, if x ∈ R A ̲ ( X ) , then [ x ] A ⊆ X , and thus we have R A ̲ ( [ x ] A ) ⊆ R A ̲ ( X ) . Additionally, since R A ̲ ( [ x ] A ) = [ x ] A , we obtain [ x ] A ⊆ R A ̲ ( X ) , and then x ∈ R A ̲ ( R A ̲ ( X ) ) ; that is, R A ̲ ( X ) ⊆ R A ̲ ( R A ̲ ( X ) ) . Consequently, we conclude that R A ̲ ( R A ̲ ( X ) ) = R A ̲ ( X ) .
According to (1), R A ¯ ( X ) ⊆ R A ¯ ( R A ¯ ( X ) ) can be derived. Moreover, if x ∈ R A ¯ ( R A ¯ ( X ) ) , then [ x ] A ∩ R A ¯ ( X ) ≠ Ø , which implies that there exists y ∈ [ x ] A ∩ R A ¯ ( X ) , and so [ y ] A ∩ X ≠ Ø . Since [ y ] A ⊆ [ x ] A , we obtain [ x ] A ∩ X ≠ Ø ; that is, x ∈ R A ¯ ( X ) , and thus R A ¯ ( R A ¯ ( X ) ) ⊆ R A ¯ ( X ) . Consequently, R A ¯ ( R A ¯ ( X ) ) = R A ¯ ( X ) . □
Example 2
(Continued from Example 1). Let B = { c 1 , c 2 , c 5 } ⊆ C and X = { x 2 , x 3 , x 5 , x 7 } ⊆ U ; then, R B ̲ ( X ) and R B ¯ ( X ) can be calculated as follows.
First, we calculate the dominating and dominated classes of x with respect to R B , as illustrated in Table 3.
Next, we can obtain x 3 ∈ R B ̲ ( X ) , since [ x 3 ] B ⊆ X . Similarly, x 5 ∈ R B ̲ ( X ) , and therefore R B ̲ ( X ) = { x 3 , x 5 } . Analogously, we can further derive R B ¯ ( X ) = { x 1 , x 2 , x 3 , x 5 , x 6 , x 7 , x 8 } .
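The calculation in Example 2 follows the same mechanical pattern for every object, which a short Python sketch of Definition 10 makes explicit; here the dominating classes are supplied directly as a hypothetical toy input rather than recomputed from Table 3.

```python
# Definition 10: dominance-based lower/upper approximations, given
# each object's dominating class. The classes below are hypothetical.

def approximations(U, dom, X):
    lower = {x for x in U if dom[x] <= X}   # [x]_B ⊆ X
    upper = {x for x in U if dom[x] & X}    # [x]_B ∩ X ≠ Ø
    return lower, upper

U = {1, 2, 3, 4}
dom = {1: {1, 2}, 2: {2}, 3: {2, 3}, 4: {1, 2, 4}}  # [x]_B for each x
X = {2, 3}
lower, upper = approximations(U, dom, X)
print(lower, upper)
```

Only the membership test changes relative to the classical model: equivalence classes are replaced by dominating classes.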
In the following, we present discernibility matrices and attribute reduction methods in DHFIS based on dominance relations.
Definition 11.
For a DHFIS S = ( U , C , V , f ) , an attribute c ∈ C is essential if R C ≠ R C − { c } . Otherwise, c is dispensable. The set consisting of all essential attributes is referred to as the core of C.
Definition 12.
Let S = ( U , C , V , f ) be a DHFIS and B ⊆ C . If R B = R C and R A ≠ R C for any A ⊊ B , then B is called an attribute reduction of S.
Attribute reduction of a DHFIS aims to extract the essential attributes while keeping the classification ability unchanged, which can simplify the information system and raise decision efficiency.
Definition 13.
For a DHFIS S = ( U , C , V , f ) , the discernibility attribute set between x and y is defined as D i s ( x , y ) = { c ∈ C : ( x , y ) ∉ R { c } } . D I S = ( D i s ( x , y ) : x , y ∈ U ) is called the discernibility matrix of S.
Proposition 3.
For a DHFIS S = ( U , C , V , f ) and B ⊆ C , R C = R B if and only if B ∩ D i s ( x , y ) ≠ Ø for any x , y ∈ U with D i s ( x , y ) ≠ Ø .
Proof. 
Suppose R C = R B . If D i s ( x , y ) ≠ Ø , then there exists c ∈ C such that ( x , y ) ∉ R { c } . That is, ( x , y ) ∉ R C , so we have ( x , y ) ∉ R B according to R C = R B . Consequently, there exists b ∈ B such that ( x , y ) ∉ R { b } , so B ∩ D i s ( x , y ) ≠ Ø holds.
Conversely, according to Proposition 1, it is clear that R C ⊆ R B . On the other hand, for any ( x , y ) ∈ U 2 , if ( x , y ) ∉ R C , then D i s ( x , y ) ≠ Ø , and B ∩ D i s ( x , y ) ≠ Ø implies that there exists c ∈ B with c ∈ D i s ( x , y ) . This means that ( x , y ) ∉ R { c } , and then we obtain ( x , y ) ∉ R B . Therefore, R B ⊆ R C holds. Based on the analysis above, R C = R B . □
Definition 14.
For a DHFIS S = ( U , C , V , f ) , the discernibility function of S is defined as M = ⋀ { ⋁ D i s ( x , y ) : ( x , y ) ∈ U 2 , D i s ( x , y ) ≠ Ø } , where ⋁ D i s ( x , y ) denotes the disjunction of the elements in D i s ( x , y ) .
According to the idea of discernibility functions, if M is transformed into its minimum disjunctive normal form M = ⋁ k = 1 t ( ⋀ s = 1 q k c i s ) , then there are t attribute reductions B k ( k = 1 , 2 , … , t ) with B k = { c i s ∈ C : s = 1 , 2 , … , q k } .
Example 3
(Continued from Example 1). According to Definition 13, we can obtain the discernibility matrix of Table 1 as shown below.
Ø { c 1 , c 2 } { c 4 } Ø Ø { c 1 , c 2 } { c 1 , c 2 } { c 2 }
{ c 3 , c 4 , c 5 } Ø { c 4 } { c 3 } { c 3 , c 4 } { c 1 , c 3 } { c 3 } { c 3 , c 4 }
{ c 1 , c 2 , c 3 , c 5 } { c 1 , c 2 , c 3 , c 5 } Ø { c 2 , c 3 } { c 2 , c 3 , c 5 } { c 1 , c 2 , c 3 , c 5 } { c 1 , c 2 , c 3 } { c 2 , c 3 }
{ c 1 , c 2 , c 3 , c 4 , c 5 } { c 1 , c 2 , c 4 , c 5 } { c 1 , c 4 , c 5 } Ø { c 1 , c 3 , c 4 , c 5 } { c 1 , c 2 , c 3 , c 4 , c 5 } { c 1 , c 2 , c 4 , c 5 } { c 1 , c 2 , c 3 , c 4 , c 5 }
{ c 1 , c 2 , c 3 , c 4 , c 5 } { c 1 , c 2 , c 5 } { c 1 , c 4 } { c 2 } Ø { c 1 , c 2 , c 3 , c 5 } { c 1 , c 2 } { c 1 , c 2 , c 3 }
{ c 3 , c 4 , c 5 } { c 2 , c 4 , c 5 } { c 4 } Ø { c 4 } Ø Ø { c 4 }
{ c 3 , c 4 , c 5 } { c 1 , c 2 , c 4 , c 5 } { c 4 , c 5 } { c 3 } { c 3 , c 4 , c 5 } { c 1 , c 2 , c 3 , c 4 , c 5 } Ø { c 2 , c 3 , c 4 , c 5 }
{ c 1 , c 3 , c 4 , c 5 } { c 1 , c 2 , c 5 } { c 1 , c 4 , c 5 } Ø { c 4 , c 5 } { c 1 , c 2 , c 3 , c 5 } { c 1 } Ø
Therefore, the attribute reduction of Table 1 can be obtained as follows. M = ( c 3 ∨ c 4 ) ∧ ( c 1 ∨ c 2 ∨ c 3 ∨ c 5 ) ∧ ( c 1 ∨ c 2 ∨ c 5 ) ∧ ( c 2 ∨ c 4 ∨ c 5 ) ∧ c 4 ∧ c 2 ∧ c 3 ∧ ( c 2 ∨ c 3 ∨ c 5 ) ∧ ( c 1 ∨ c 2 ) ∧ ( c 1 ∨ c 3 ) ∧ ( c 1 ∨ c 2 ) ∧ c 1 = c 1 ∧ c 2 ∧ c 3 ∧ c 4 . In summary, the only attribute reduction of Table 1 is { c 1 , c 2 , c 3 , c 4 } , which indicates that c 1 , c 2 , c 3 , and c 4 are essential attributes.
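The minimization carried out by hand in Example 3 can be automated: treat the entries of the discernibility function as clauses of a conjunctive normal form and expand it into a minimal disjunctive normal form. A brute-force Python sketch (the clauses are transcribed from the function M above, with the duplicate dropped) is:

```python
# Attribute reductions via the discernibility function (Definition 14):
# expand the conjunction of clauses and keep minimal terms (absorption).

from itertools import product

def reductions(clauses):
    """All minimal attribute sets hitting every clause."""
    terms = {frozenset(choice) for choice in product(*clauses)}
    return {t for t in terms if not any(s < t for s in terms)}

# Clauses of the discernibility function M from Example 3.
clauses = [{'c3', 'c4'}, {'c1', 'c2', 'c3', 'c5'}, {'c1', 'c2', 'c5'},
           {'c2', 'c4', 'c5'}, {'c4'}, {'c2'}, {'c3'},
           {'c2', 'c3', 'c5'}, {'c1', 'c2'}, {'c1', 'c3'}, {'c1'}]
reds = reductions(clauses)
print([sorted(r) for r in reds])  # [['c1', 'c2', 'c3', 'c4']]
```

The Cartesian-product expansion is exponential in the number of clauses, so this sketch is only meant for small systems such as the one in Example 3.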

4. Rough Set Model and Attribute Reduction of DHFDIS

Suppose S = ( U , C ∪ { d } , V , f ) is a DHFDIS; then, the decision classes U / { d } = { D 1 , D 2 , … , D r } are ordered. That is, if j ≥ i , the objects in D j are superior to the objects in D i . D i + = ∪ j ≥ i D j and D i − = ∪ j ≤ i D j are referred to as the upward and downward unions of D i .
Definition 15.
For a DHFDIS S = ( U , C ∪ { d } , V , f ) , the lower approximation and upper approximation of D i + and D i − with respect to R C are defined as follows.
R C ̲ ( D i + ) = { x ∈ U : [ x ] C ⊆ D i + } , R C ¯ ( D i + ) = { x ∈ U : [ x ] C ∩ D i + ≠ Ø } .
R C ̲ ( D i − ) = { x ∈ U : [ x ] C ⊆ D i − } , R C ¯ ( D i − ) = { x ∈ U : [ x ] C ∩ D i − ≠ Ø } .
This rough set model is also based on the dominance relation and is constructed by considering the inclusion relationships between the upward and downward unions of the decision classes and the dominance classes of the objects. The upward and downward unions of decision classes can precisely delineate the boundaries of the decision classes. Therefore, the model can not only adapt to different decision-level requirements but also flexibly deal with uncertain and fuzzy data.
Example 4.
Given a DHFDIS obtained by adding a decision attribute d to Table 1, as shown in Table 4, d is the ranking of the eight students. Apparently, D 1 = { x 3 } , D 2 = { x 1 , x 8 } , D 3 = { x 4 , x 6 , x 7 } , and D 4 = { x 2 , x 5 } . Let B = { c 1 , c 2 , c 5 } ⊆ C ; then, we calculate the lower approximation and upper approximation of D 2 + and D 2 − with respect to R B as follows.
According to Definition 15, it is easy to obtain D 2 + = { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 , x 8 } and D 2 − = { x 1 , x 3 , x 8 } . By calculation, we can obtain R B ̲ ( D 2 + ) = { x 4 , x 5 , x 7 , x 8 } , R B ¯ ( D 2 + ) = { x 1 , x 2 , x 4 , x 5 , x 6 , x 7 , x 8 } , R B ̲ ( D 2 − ) = { x 3 } , and R B ¯ ( D 2 − ) = { x 1 , x 2 , x 3 , x 6 , x 8 } .
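The upward and downward unions used here are easy to compute mechanically; a small Python sketch with the class assignment of Example 4 reads:

```python
# Upward union D_i+ and downward union D_i- of ordered decision classes.

def upward(classes, i):
    return set().union(*(objs for j, objs in classes.items() if j >= i))

def downward(classes, i):
    return set().union(*(objs for j, objs in classes.items() if j <= i))

# Decision classes from Example 4 (d is the students' ranking).
classes = {1: {'x3'}, 2: {'x1', 'x8'},
           3: {'x4', 'x6', 'x7'}, 4: {'x2', 'x5'}}
print(sorted(upward(classes, 2)))    # D2+ = all students except x3
print(sorted(downward(classes, 2)))  # D2- = {x1, x3, x8}
```

Note that D 1 + = D r − = U, so only the intermediate unions carry information.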
Definition 16.
Let S = ( U , C ∪ { d } , V , f ) be a DHFDIS. If R C ⊆ R { d } , where R { d } = { ( x , y ) ∈ U 2 : f ( x , d ) ≤ f ( y , d ) } , then S is termed a ⪯-consistent DHFDIS.
Definition 17.
Let S = ( U , C ∪ { d } , V , f ) be a ⪯-consistent DHFDIS and B ⊆ C . If R B ⊆ R { d } and R A ⊈ R { d } for any A ⊊ B , then B is called a ⪯-consistent reduction of S.
Definition 18.
In a ⪯-consistent DHFDIS, the lower and upper approximation discernibility attribute sets between x and y are defined as follows.
α ̲ ( x , y ) = { c ∈ C : ( x , y ) ∉ R { c } } if f ( x , d ) < f ( y , d ) , and α ̲ ( x , y ) = Ø if f ( x , d ) ≥ f ( y , d ) .
α ¯ ( x , y ) = { c ∈ C : ( x , y ) ∉ R { c } } if f ( x , d ) > f ( y , d ) , and α ¯ ( x , y ) = Ø if f ( x , d ) ≤ f ( y , d ) .
Furthermore, M ̲ = ⋀ { ⋁ α ̲ ( x , y ) : ( x , y ) ∈ U 2 , α ̲ ( x , y ) ≠ Ø } and M ¯ = ⋀ { ⋁ α ¯ ( x , y ) : ( x , y ) ∈ U 2 , α ¯ ( x , y ) ≠ Ø } are called the lower and upper approximation discernibility functions of the ⪯-consistent DHFDIS. According to the idea of discernibility functions, if M ̲ is transformed into its minimum disjunctive normal form ⋁ k = 1 t ( ⋀ s = 1 q k c i s ) , then there are t lower approximation reductions { B k : k = 1 , 2 , … , t } with B k = { c i s ∈ C : s = 1 , 2 , … , q k } . Similarly, if M ¯ is transformed into its minimum disjunctive normal form ⋁ k = 1 t ( ⋀ s = 1 q k c i s ) , then there are t upper approximation reductions { B k : k = 1 , 2 , … , t } with B k = { c i s ∈ C : s = 1 , 2 , … , q k } .
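A minimal Python sketch of Definition 18 follows; it assumes a predicate related(x, y, c) that tests ( x , y ) ∈ R { c } and numeric decision values d[x], and all names in it are hypothetical.

```python
# Lower/upper approximation discernibility sets between two objects.

def alpha_lower(x, y, C, d, related):
    """Attributes c with (x, y) not in R_{c}, when d(x) < d(y); else empty."""
    if d[x] < d[y]:
        return {c for c in C if not related(x, y, c)}
    return set()

def alpha_upper(x, y, C, d, related):
    """Attributes c with (x, y) not in R_{c}, when d(x) > d(y); else empty."""
    if d[x] > d[y]:
        return {c for c in C if not related(x, y, c)}
    return set()

# Toy check: decisions d(a) = 1 < d(b) = 2; the pair (a, b) is
# related under c1 but not under c2.
rel = {('a', 'b', 'c1'): True, ('a', 'b', 'c2'): False}
related = lambda x, y, c: rel[(x, y, c)]
d = {'a': 1, 'b': 2}
print(alpha_lower('a', 'b', {'c1', 'c2'}, d, related))  # {'c2'}
print(alpha_upper('a', 'b', {'c1', 'c2'}, d, related))  # empty set
```

Because d(a) < d(b), only the lower discernibility set is non-empty for this pair; the upper one would activate for pairs ordered the other way.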
In what follows, we utilize a numerical example to illustrate the calculation of discernibility matrices and reductions of a DHFDIS.
Example 5
(Continued from Example 4). According to Definition 16, it is apparent that R C ⊆ R { d } ; thus, Table 4 is a ⪯-consistent DHFDIS. Furthermore, we can calculate the lower and upper approximation discernibility matrices of Table 4, denoted by L D and U D .
L D =
Ø { c 1 , c 2 } Ø Ø Ø { c 1 , c 2 } { c 1 , c 2 } Ø
Ø Ø Ø Ø Ø Ø Ø Ø
{ c 1 , c 2 , c 3 , c 5 } { c 1 , c 2 , c 3 , c 5 } Ø { c 2 , c 3 } { c 2 , c 3 , c 5 } { c 1 , c 2 , c 3 , c 5 } { c 1 , c 2 , c 3 } { c 2 , c 3 }
Ø { c 1 , c 2 , c 4 , c 5 } Ø Ø { c 1 , c 3 , c 4 , c 5 } Ø Ø Ø
Ø Ø Ø Ø Ø Ø Ø Ø
Ø { c 2 , c 4 , c 5 } Ø Ø { c 4 } Ø Ø Ø
Ø { c 1 , c 2 , c 4 , c 5 } Ø Ø { c 3 , c 4 , c 5 } Ø Ø Ø
Ø { c 1 , c 2 , c 5 } Ø Ø { c 4 , c 5 } { c 1 , c 2 , c 3 , c 5 } { c 1 } Ø
U D =
Ø Ø { c 4 } Ø Ø Ø Ø Ø
{ c 3 , c 4 , c 5 } Ø { c 4 } { c 3 } Ø { c 1 , c 3 } { c 3 } { c 3 , c 4 }
Ø Ø Ø Ø Ø Ø Ø Ø
{ c 1 , c 2 , c 3 , c 4 , c 5 } Ø { c 1 , c 4 , c 5 } Ø Ø Ø Ø { c 1 , c 2 , c 3 , c 4 , c 5 }
{ c 1 , c 2 , c 3 , c 4 , c 5 } Ø { c 1 , c 4 } { c 2 } Ø { c 1 , c 2 , c 3 , c 5 } { c 1 , c 2 } { c 1 , c 2 , c 3 }
{ c 3 , c 4 , c 5 } Ø { c 4 } Ø Ø Ø Ø { c 4 }
{ c 3 , c 4 , c 5 } Ø { c 4 , c 5 } Ø Ø Ø Ø { c 2 , c 3 , c 4 , c 5 }
Ø Ø { c 1 , c 4 , c 5 } Ø Ø Ø Ø Ø
Moreover, the lower and upper approximation reductions of S can be obtained as below.
M ̲ = ( c 1 ∨ c 2 ) ∧ ( c 2 ∨ c 3 ) ∧ c 4 ∧ c 1 = ( c 1 ∧ c 2 ∧ c 4 ) ∨ ( c 1 ∧ c 3 ∧ c 4 ) ,
M ¯ = ( c 3 ∨ c 4 ) ∧ c 4 ∧ c 3 ∧ ( c 1 ∨ c 3 ) ∧ c 2 ∧ ( c 3 ∨ c 4 ) = c 2 ∧ c 3 ∧ c 4 .
Therefore, the lower approximation reductions are { c 1 , c 2 , c 4 } and { c 1 , c 3 , c 4 } , and the upper approximation reduction is { c 2 , c 3 , c 4 } . Next, we present a novel approach to extracting the lower and upper approximation decision rules.
Definition 19.
Let S = ( U , C ∪ { d } , V , f ) be a ⪯-consistent DHFDIS. For any x ∈ U , the lower and upper approximation discernibility functions of x are defined as h ̲ ( x ) = ⋀ { ⋁ α ̲ ( x , y ) : y ∈ U , α ̲ ( x , y ) ≠ Ø } and h ¯ ( x ) = ⋀ { ⋁ α ¯ ( x , y ) : y ∈ U , α ¯ ( x , y ) ≠ Ø } , where ⋁ α ̲ ( x , y ) refers to the disjunction of the elements in α ̲ ( x , y ) .
In classical RST, rule extraction plays a crucial role in aiding decision-making and takes precedence over rule reduction in certain situations. Specifically, the lower and upper approximation discernibility functions for a given object can help us determine which combinations of attributes can be derived as the lower and upper decision rules. An example is given to illustrate the approach of extracting rules.
Example 6
(Continued from Example 5). Compute the lower and upper approximation discernibility functions of objects in Table 4 as below.
h ̲ ( x 1 ) = c 1 ∨ c 2 , h ¯ ( x 1 ) = c 4 ,
h ̲ ( x 2 ) = Ø , h ¯ ( x 2 ) = c 3 ∧ c 4 ,
h ̲ ( x 3 ) = c 2 ∨ c 3 , h ¯ ( x 3 ) = Ø ,
h ̲ ( x 4 ) = ( c 1 ∨ c 2 ∨ c 4 ∨ c 5 ) ∧ ( c 1 ∨ c 3 ∨ c 4 ∨ c 5 ) , h ¯ ( x 4 ) = c 1 ∨ c 4 ∨ c 5 ,
h ̲ ( x 5 ) = Ø , h ¯ ( x 5 ) = c 2 ∧ ( c 1 ∨ c 4 ) ,
h ̲ ( x 6 ) = c 4 , h ¯ ( x 6 ) = c 4 ,
h ̲ ( x 7 ) = ( c 1 ∨ c 2 ∨ c 4 ∨ c 5 ) ∧ ( c 3 ∨ c 4 ∨ c 5 ) , h ¯ ( x 7 ) = c 4 ∨ c 5 ,
h ̲ ( x 8 ) = c 1 ∧ ( c 4 ∨ c 5 ) , h ¯ ( x 8 ) = c 1 ∨ c 4 ∨ c 5 .
Since S is consistent, some lower and upper approximation decision rules can be derived from the lower and upper approximation discernibility functions of objects. Take h ̲ ( x 1 ) = c 1 ∨ c 2 for example. If the evaluation value of an object x under attribute c 1 satisfies f ( x , c 1 ) ⪰ f ( x 1 , c 1 ) = { 0.5 , 0.4 , 0.3 } or the evaluation value of x under attribute c 2 satisfies f ( x , c 2 ) ⪰ f ( x 1 , c 2 ) = { 0.4 , 0.5 , 0.6 } , then the evaluation value of x under the decision attribute d satisfies f ( x , d ) ≥ f ( x 1 , d ) = 2 . On the other hand, take h ¯ ( x 2 ) = c 3 ∧ c 4 for example. It can be obtained that f ( x , d ) ≤ f ( x 2 , d ) = 4 when f ( x , c 3 ) ⪯ f ( x 2 , c 3 ) = { 0.7 , 0.6 , 0.5 } and f ( x , c 4 ) ⪯ f ( x 2 , c 4 ) = { 0.8 , 0.7 , 0.3 } . These two rules are based on the lower and upper discernibility functions of x 1 and x 2 , so they are said to be supported by x 1 and x 2 , respectively. Furthermore, we can obtain the following decision rules.
f(x, c2) ⪯ { 0.4 , 0.5 , 0.6 } ⇒ f(x, d) ≤ 2 (supported by x1),
f(x, c2) ⪯ { 0.6 , 0.7 , 0.8 } ∧ f(x, c3) ⪯ { 0.9 , 0.7 } ⇒ f(x, d) ≤ 1 (supported by x3),
f(x, c1) ⪯ { 0.5 , 0.6 , 0.7 } ∧ f(x, c3) ⪯ { 0.9 , 0.7 , 0.2 } ⇒ f(x, d) ≤ 3 (supported by x4),
f(x, c4) ⪯ { 0.8 , 0.6 , 0.4 } ⇒ f(x, d) ≤ 3 (supported by x6),
f(x, c2) ⪯ { 0.6 , 0.6 , 0.3 } ∧ f(x, c4) ⪯ { 0.7 , 0.7 , 0.4 } ⇒ f(x, d) ≤ 3 (supported by x7),
f(x, c1) ⪯ { 0.7 , 0.5 , 0.3 } ∧ f(x, c4) ⪯ { 0.9 , 0.6 , 0.3 } ⇒ f(x, d) ≤ 2 (supported by x8).
f(x, c4) ⪰ { 0.7 , 0.4 , 0.1 } ⇒ f(x, d) ≥ 2 (supported by x1),
f(x, c3) ⪰ { 0.5 , 0.6 , 0.7 } ∧ f(x, c4) ⪰ { 0.3 , 0.7 , 0.8 } ⇒ f(x, d) ≥ 4 (supported by x2),
f(x, c1) ⪰ { 0.5 , 0.6 , 0.7 } ∧ f(x, c4) ⪰ { 0.6 , 0.8 } ∧ f(x, c5) ⪰ { 0.9 , 0.7 , 0.6 , 0.6 } ⇒ f(x, d) ≥ 3 (supported by x4),
f(x, c2) ⪰ { 0.9 , 0.7 , 0.5 } ∧ f(x, c4) ⪰ { 0.5 , 0.4 , 0.3 } ⇒ f(x, d) ≥ 4 (supported by x5),
f(x, c4) ⪰ { 0.4 , 0.6 , 0.8 } ⇒ f(x, d) ≥ 3 (supported by x6),
f(x, c4) ⪰ { 0.7 , 0.7 , 0.4 } ∧ f(x, c5) ⪰ { 0.7 , 0.6 , 0.5 } ⇒ f(x, d) ≥ 3 (supported by x7),
f(x, c1) ⪰ { 0.7 , 0.5 , 0.3 } ∧ f(x, c4) ⪰ { 0.3 , 0.6 , 0.9 } ∧ f(x, c5) ⪰ { 0.4 , 0.5 , 0.9 } ⇒ f(x, d) ≥ 2 (supported by x8).
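Each comparison in the rules above relies on the dominance relation between HFEs, which this paper builds from the score and deviation of an HFE. The sketch below encodes one plausible reading of that comparison — mean membership value as the score, population standard deviation as the deviation, ties in score broken in favor of the smaller deviation. The function names and the tie-breaking direction are our assumptions, not the paper's exact definitions.

```python
import math

def score(h):
    """Score of an HFE: the mean of its membership values."""
    return sum(h) / len(h)

def deviation(h):
    """Deviation of an HFE: population standard deviation around the score."""
    s = score(h)
    return (sum((v - s) ** 2 for v in h) / len(h)) ** 0.5

def hfe_geq(h1, h2):
    """h1 dominates h2: strictly higher score, or (near-)equal score with a
    deviation that is no larger (assumed tie-breaking direction)."""
    s1, s2 = score(h1), score(h2)
    if not math.isclose(s1, s2):
        return s1 > s2
    return deviation(h1) <= deviation(h2)

# x6 and x8 share the c4 score 0.6 in Table 4, but x6's evaluation
# is less dispersed, so it ranks higher under this reading.
print(hfe_geq([0.8, 0.6, 0.4], [0.9, 0.6, 0.3]))  # -> True
```

Note the use of math.isclose for score equality: the scores of { 0.8 , 0.6 , 0.4 } and { 0.9 , 0.6 , 0.3 } are both 0.6 mathematically, but differ by a few ulps in floating point, so an exact == test would skip the deviation tie-break.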

5. Conclusions

In this study, we introduce the deviation of HFEs, which yields a more reliable method for ranking them. A dominance relation is then defined based on the score and deviation of HFEs. On this basis, we develop a dominance-based hesitant fuzzy rough set model together with an attribute reduction method. Since dominance relationships must be considered in many decision-making problems, our work provides new insight into hesitant fuzzy decision analysis. In addition, a novel attribute reduction approach in a DHFDIS is proposed for extracting decision rules. Such decision rules can be applied to practical problems in medical diagnosis, financial risk assessment, logistics optimization, and other fields. In future research, we will focus on extending the method to interval-valued fuzzy environments, conducting practical case studies, and applying the approach more broadly.

Author Contributions

Y.B. was involved in conceptualization, writing, supervision, and discussion. S.C. was involved in methodology, paper organization, writing, and discussion. All authors attest that they meet the criteria for authorship. All authors have read and agreed to the published version of the manuscript.

Funding

This work is sponsored by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2023D01C03).

Data Availability Statement

Inquiries about data availability should be directed to the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
  2. Hu, Q.H.; Yu, D.R.; Liu, J.F.; Wu, C.X. Neighborhood rough set based heterogeneous feature subset selection. Inf. Sci. 2008, 178, 3577–3594.
  3. Ren, J.; Zhu, P. Uncertainty measures in fuzzy set-valued information systems based on fuzzy β-neighborhood similarity relations. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2023, 31, 585–618.
  4. Nanda, S.; Majumdar, S. Fuzzy rough sets. Fuzzy Sets Syst. 1992, 45, 157–160.
  5. Cornelis, C.; De Cock, M.; Kerre, E.E. Intuitionistic fuzzy rough sets: At the crossroads of imperfect knowledge. Expert Syst. 2003, 20, 260–270.
  6. Tishya, M.; Anitha, A. Precipitation prediction by integrating rough set on fuzzy approximation space with deep learning techniques. Appl. Soft Comput. 2023, 139, 110253.
  7. Janusz, A.; Zalewska, A.; Wawrowski, L.; Biczyk, P.; Ludziejewski, J.; Sikora, M.; Slezak, D. BrightBox—A rough set based technology for diagnosing mistakes of machine learning models. Appl. Soft Comput. 2023, 141, 110285.
  8. Zhu, H.H.; Liu, C.C.; Zhang, Y.; Shi, W. A rule-based decision support method combining variable precision rough set and stochastic multi-objective acceptability analysis for multi-attribute decision-making. Math. Probl. Eng. 2022, 2022, 2876344.
  9. Li, Q.B.; Wei, Y.; Li, W.J. Method for fine pattern recognition of space targets using the entropy weight fuzzy-rough nearest neighbor algorithm. J. Appl. Spectrosc. 2021, 87, 1018–1022.
  10. Guo, D.D.; Jiang, C.M.; Sheng, R.X.; Liu, S.S. A novel outcome evaluation model of three-way decision: A change viewpoint. Inf. Sci. 2022, 607, 1089–1110.
  11. Xu, W.H.; Kong, Q.Z.; Zhang, X.W.; Long, B.H. A novel granular computing model based on three-way decision. Int. J. Approx. Reason. 2022, 144, 92–112.
  12. Zhang, X.Y.; Hou, J.L.; Li, J.R. Multigranulation rough set methods and applications based on neighborhood dominance relation in intuitionistic fuzzy datasets. Int. J. Fuzzy Syst. 2022, 24, 3602–3625.
  13. Zhang, X.Y.; Hou, J.L. A novel rough set method based on adjustable-perspective dominance relations in intuitionistic fuzzy ordered decision tables. Int. J. Approx. Reason. 2023, 154, 218–241.
  14. Huang, B.; Li, H.X.; Wei, D.K. Dominance-based rough set model in intuitionistic fuzzy information systems. Knowl.-Based Syst. 2012, 28, 115–123.
  15. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539.
  16. Shen, Q.; Lou, J.G.; Liu, Y.; Jiang, Y.L. Hesitant fuzzy multi-attribute decision making based on binary connection number of set pair analysis. Soft Comput. 2021, 25, 11.
  17. Sun, G.D.; Guan, X.; Yi, X.; Zhou, Z. Grey relational analysis between hesitant fuzzy sets with applications to pattern recognition. Expert Syst. Appl. 2018, 92, 521–532.
  18. Xia, M.M.; Xu, Z.S. Hesitant fuzzy information aggregation in decision making. Int. J. Approx. Reason. 2011, 52, 395–407.
  19. Xu, Z.S.; Xia, M.M. Distance and similarity measures for hesitant fuzzy sets. Inf. Sci. 2011, 181, 2128–2138.
  20. Yu, D.J. Archimedean aggregation operators based on dual hesitant fuzzy set and their application to GDM. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2015, 23, 761–781.
  21. Zhang, Z.M.; Wang, C.; Tian, X.D. A consensus model for group decision making with hesitant fuzzy information. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2015, 23, 459–480.
  22. Zhou, W.; Liu, M.; Xu, Z.S.; Herrera-Viedma, E. Global fusion of multiple order relations and hesitant fuzzy decision analysis. Appl. Intell. 2021, 52, 1–23.
  23. Yang, X.B.; Song, X.N.; Qi, Y.S.; Yang, J.Y. Constructive and axiomatic approaches to hesitant fuzzy rough set. Soft Comput. 2014, 18, 1067–1077.
  24. Zhang, H.D.; Shu, L.; Liao, S.L. Hesitant fuzzy rough set over two universes and its application in decision making. Soft Comput. 2017, 21, 1803–1816.
  25. Zhang, H.D.; Shu, L.; Liao, S.L.; Xia, C.R. Dual hesitant fuzzy rough set and its application. Soft Comput. 2017, 21, 3287–3305.
  26. Liang, D.C.; Liu, D. A novel risk decision making based on decision-theoretic rough sets under hesitant fuzzy information. IEEE Trans. Fuzzy Syst. 2015, 23, 237–247.
  27. Li, X.; Huang, X.J. A novel three-way investment decisions based on decision-theoretic rough sets with hesitant fuzzy information. Int. J. Fuzzy Syst. 2020, 22, 2708–2719.
  28. Liang, D.C.; Xu, Z.S.; Liu, D. A new aggregation method-based error analysis for decision-theoretic rough sets and its application in hesitant fuzzy information systems. IEEE Trans. Fuzzy Syst. 2017, 25, 1685–1697.
  29. Mi, C.L.; Yang, Y.F.; Xu, J. A new attribute reduction algorithm based on classification closeness function. Inf. Comput. Appl. 2010, 106, 523–530.
  30. Zhang, X.Y.; Wei, L.; Xu, W.H. Attributes reduction and rules acquisition in a lattice-valued information system with fuzzy decision. Int. J. Mach. Learn. Cybern. 2017, 8, 135–147.
  31. Zhong, Y.; Chen, H.M.; Li, T.R.; Yu, Z.; Sang, B.B.; Luo, C. Unsupervised attribute reduction for mixed data based on fuzzy rough sets. Inf. Sci. 2021, 572, 67–87.
  32. Chen, J.Y.; Zhu, P. A variable precision multigranulation rough set model and attribute reduction. Soft Comput. 2022, 27, 85–106.
Table 1. A dominance-based hesitant fuzzy information system.
c 1 c 2 c 3 c 4 c 5
x 1 { 0.5 , 0.4 , 0.3 } { 0.6 , 0.5 , 0.4 } { 0.5 , 0.2 , 0.2 , 0.3 } { 0.7 , 0.4 , 0.1 } { 0.4 , 0.2 }
x 2 { 0.7 , 0.3 , 0.2 } { 0.5 , 0.1 } { 0.7 , 0.6 , 0.5 } { 0.8 , 0.7 , 0.3 } { 0.4 , 0.3 , 0.2 }
x 3 { 0.8 , 0.4 , 0.3 } { 0.8 , 0.7 , 0.6 } { 0.9 , 0.7 } { 0.8 , 0.3 , 0.1 } { 0.9 , 0.6 , 0.3 }
x 4 { 0.7 , 0.6 , 0.5 } { 0.8 , 0.5 , 0.5 } { 0.9 , 0.7 , 0.2 } { 0.6 , 0.8 } { 0.9 , 0.7 , 0.6 , 0.6 }
x 5 { 0.9 , 0.6 , 0.3 } { 0.9 , 0.7 , 0.5 } { 0.8 , 0.4 , 0.3 } { 0.5 , 0.5 , 0.3 , 0.3 } { 0.8 , 0.4 , 0.3 }
x 6 { 0.4 , 0.2 } { 0.4 } { 0.5 , 0.4 , 0.3 } { 0.8 , 0.6 , 0.4 } { 0.8 , 0.5 , 0.2 }
x 7 { 0.5 , 0.5 , 0.2 } { 0.6 , 0.6 , 0.3 } { 1.0 , 0.4 , 0.4 } { 0.7 , 0.7 , 0.4 } { 0.7 , 0.6 , 0.5 }
x 8 { 0.7 , 0.5 , 0.3 } { 0.8 , 0.5 , 0.2 } { 0.7 , 0.7 , 0.1 } { 0.9 , 0.6 , 0.3 } { 0.9 , 0.5 , 0.4 }
Table 2. The dominating and dominated classes induced by R_C.
[ x ]_C^≥ [ x ]_C^≤
x 1 { x 1 , x 4 , x 5 } { x 1 }
x 2 { x 2 } { x 2 }
x 3 { x 3 } { x 3 }
x 4 { x 4 } { x 1 , x 4 , x 6 , x 8 }
x 5 { x 5 } { x 1 , x 5 }
x 6 { x 4 , x 6 , x 7 } { x 6 }
x 7 { x 7 } { x 6 , x 7 }
x 8 { x 4 , x 8 } { x 8 }
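As a cross-check on Table 2, the dominating and dominated classes of x1 can be recomputed directly from the Table 1 data. The sketch below assumes a score-then-deviation total order on HFEs (higher mean wins; ties broken by smaller standard deviation) — our reading of the paper's ordering, not a verbatim implementation; under this reading it reproduces the first row of Table 2.

```python
import math

# Table 1 evaluations (HFEs) for attributes c1..c5, in column order.
table1 = {
    "x1": [[0.5,0.4,0.3],[0.6,0.5,0.4],[0.5,0.2,0.2,0.3],[0.7,0.4,0.1],[0.4,0.2]],
    "x2": [[0.7,0.3,0.2],[0.5,0.1],[0.7,0.6,0.5],[0.8,0.7,0.3],[0.4,0.3,0.2]],
    "x3": [[0.8,0.4,0.3],[0.8,0.7,0.6],[0.9,0.7],[0.8,0.3,0.1],[0.9,0.6,0.3]],
    "x4": [[0.7,0.6,0.5],[0.8,0.5,0.5],[0.9,0.7,0.2],[0.6,0.8],[0.9,0.7,0.6,0.6]],
    "x5": [[0.9,0.6,0.3],[0.9,0.7,0.5],[0.8,0.4,0.3],[0.5,0.5,0.3,0.3],[0.8,0.4,0.3]],
    "x6": [[0.4,0.2],[0.4],[0.5,0.4,0.3],[0.8,0.6,0.4],[0.8,0.5,0.2]],
    "x7": [[0.5,0.5,0.2],[0.6,0.6,0.3],[1.0,0.4,0.4],[0.7,0.7,0.4],[0.7,0.6,0.5]],
    "x8": [[0.7,0.5,0.3],[0.8,0.5,0.2],[0.7,0.7,0.1],[0.9,0.6,0.3],[0.9,0.5,0.4]],
}

def score(h):      # mean membership value
    return sum(h) / len(h)

def deviation(h):  # population standard deviation around the score
    s = score(h)
    return (sum((v - s) ** 2 for v in h) / len(h)) ** 0.5

def hfe_geq(h1, h2):
    """h1 dominates h2: higher score, or equal score with deviation no larger."""
    s1, s2 = score(h1), score(h2)
    if not math.isclose(s1, s2):
        return s1 > s2
    return deviation(h1) <= deviation(h2)

def dominating_class(x):
    """[x]_C^>= : objects whose value dominates x's on every attribute."""
    return {y for y, hy in table1.items()
            if all(hfe_geq(hv, hx) for hv, hx in zip(hy, table1[x]))}

def dominated_class(x):
    """[x]_C^<= : objects whose value is dominated by x's on every attribute."""
    return {y for y, hy in table1.items()
            if all(hfe_geq(hx, hv) for hv, hx in zip(hy, table1[x]))}

print(sorted(dominating_class("x1")))  # -> ['x1', 'x4', 'x5']
```

The math.isclose guard matters here: for example, x5's and x1's values under c4 have the same score (0.4) up to floating-point noise, and x5 enters [x1]_C^≥ only through the deviation tie-break.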
Table 3. The dominating and dominated classes induced by R_B.
[ x ]_B^≥ [ x ]_B^≤
x 1 { x 1 , x 3 , x 4 , x 5 } { x 1 }
x 2 { x 2 , x 3 , x 4 , x 5 , x 7 , x 8 } { x 2 }
x 3 { x 3 } { x 1 , x 2 , x 3 , x 6 }
x 4 { x 4 } { x 1 , x 2 , x 4 , x 6 , x 7 , x 8 }
x 5 { x 5 } { x 1 , x 2 , x 5 , x 6 }
x 6 { x 3 , x 4 , x 5 , x 6 , x 7 , x 8 } { x 6 }
x 7 { x 4 , x 7 } { x 2 , x 6 , x 7 }
x 8 { x 4 , x 8 } { x 2 , x 6 , x 8 }
Table 4. A dominance-based hesitant fuzzy decision information system.
c 1 c 2 c 3 c 4 c 5 d
x 1 { 0.5 , 0.4 , 0.3 } { 0.6 , 0.5 , 0.4 } { 0.5 , 0.2 , 0.2 , 0.3 } { 0.7 , 0.4 , 0.1 } { 0.4 , 0.2 } 2
x 2 { 0.7 , 0.3 , 0.2 } { 0.5 , 0.1 } { 0.7 , 0.6 , 0.5 } { 0.8 , 0.7 , 0.3 } { 0.4 , 0.3 , 0.2 } 4
x 3 { 0.8 , 0.4 , 0.3 } { 0.8 , 0.7 , 0.6 } { 0.9 , 0.7 } { 0.8 , 0.3 , 0.1 } { 0.9 , 0.6 , 0.3 } 1
x 4 { 0.7 , 0.6 , 0.5 } { 0.8 , 0.5 , 0.5 } { 0.9 , 0.7 , 0.2 } { 0.6 , 0.8 } { 0.9 , 0.7 , 0.6 , 0.6 } 3
x 5 { 0.9 , 0.6 , 0.3 } { 0.9 , 0.7 , 0.5 } { 0.8 , 0.4 , 0.3 } { 0.5 , 0.4 , 0.3 } { 0.8 , 0.4 , 0.3 } 4
x 6 { 0.4 , 0.2 } { 0.6 , 0.3 , 0.3 } { 0.5 , 0.4 , 0.3 } { 0.8 , 0.6 , 0.4 } { 0.8 , 0.5 , 0.2 } 3
x 7 { 0.5 , 0.5 , 0.2 } { 0.6 , 0.6 , 0.3 } { 1.0 , 0.4 , 0.4 } { 0.7 , 0.7 , 0.4 } { 0.7 , 0.6 , 0.5 } 3
x 8 { 0.7 , 0.5 , 0.3 } { 0.8 , 0.5 , 0.2 } { 0.7 , 0.7 , 0.1 } { 0.9 , 0.6 , 0.3 } { 0.9 , 0.5 , 0.4 } 2
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Bao, Y.; Cheng, S. Dominance-Based Rough Set Model in Hesitant Fuzzy Information Systems. Symmetry 2024, 16, 1190. https://doi.org/10.3390/sym16091190

