Article

A Generalized Multigranulation Rough Set Model by Synthesizing Optimistic and Pessimistic Attitude Preferences

1 Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
2 Fujian Provincial Key Laboratory of Data-Intensive Computing, Quanzhou 362000, China
3 Fujian University Laboratory of Intelligent Computing and Information Processing, Quanzhou 362000, China
4 Big Data Institute, Central South University, Changsha 410075, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1367; https://doi.org/10.3390/math13091367
Submission received: 4 April 2025 / Revised: 17 April 2025 / Accepted: 18 April 2025 / Published: 22 April 2025
(This article belongs to the Special Issue Advances in Fuzzy Rough Sets and Intelligent Computing)

Abstract

Attitude preference plays an important role in multigranulation data mining and decision-making; that is, different attitude preferences lead to different results. At present, optimistic and pessimistic multigranulation rough sets have each been studied independently and thoroughly. Sometimes, however, a decision-maker's attitude may vary, shifting either from an optimistic view of decision-making to a pessimistic one or vice versa. In this paper, we propose a novel multigranulation rough set model that synthesizes optimistic and pessimistic attitude preferences. Specifically, we put forward methods to evaluate the attitude preferences in four types of decision systems. Two main issues are addressed with regard to attitude preference dependency: the first concerns the common attitude preference, while the other relates to the sequence-dependent attitude preference. Finally, we present three types of multigranulation rough set models from the perspective of the different connection methods between optimistic and pessimistic attitude preferences.

1. Introduction

Rough set theory was originally proposed by Pawlak [1] in 1982 as a new mathematical tool to measure uncertain concepts. As is well known, in the standard version, the notion of a rough set was considered to be a formal approximation of a crisp set with a pair of sets that were respectively called the lower and upper approximations of the crisp set. Since then, this promising research tool has attracted many researchers, and the development of rough sets, extensions, generalizations, and applications has continued to evolve. Now, it has become an extremely useful approach in data mining, knowledge discovery, and machine learning as an important component of hybrid solutions [2,3].
In the standard version of rough sets, equivalence relations were used as the building blocks to characterize more complicated concepts. However, this may not be valid in more complicated information systems. In order to solve this problem, some important studies have focused on extensions and generalizations of the standard version of a rough set. As a result, the developed rough set theory was used in a broader area than ever before and no longer confined to the traditional information systems [4,5,6,7,8]. Then, some novel rule acquisition methods in different kinds of decision systems [9,10] were explored as their by-products.
From the perspective of granular computing, a notion coined by Zadeh [11] and recently developed by many experts [12,13,14], the above kinds of rough set models in fact discussed the issue of characterizing a given set by only one granulation. But, sometimes, we have to solve a problem involving multiple granulations in many real applications. For instance, multiple granulations were used for describing a multi-scale dataset [15], and how to select an optimal granulation was investigated for extracting the best rules. Multi-source information fusion approaches induced by pessimistic and optimistic multigranulation rough sets [16,17] were studied and shown to be better than the classical rough sets in terms of decision-making; these were further generalized to cater to datasets equipped with incomplete, neighborhood, covering, or fuzzy attributes [18,19,20,21]. Moreover, the pessimistic and optimistic multigranulation rough sets immediately led to new types of decision rules that can be measured by support and certainty factors [16,17]. In recent years, some new results have been obtained. For example, Tan et al. [22] applied evidence theory to the numerical characterization of multigranulation rough sets in incomplete information systems. Qian et al. [23] proposed local multigranulation decision-theoretic rough sets as a semi-supervised learning method. She et al. [24] developed a multiple-valued logic method for the multigranulation rough set model.
Note that the theory of three-way decisions proposed by Yao [3] and further investigated by other researchers [25,26,27,28,29] can be unified with the description framework of multigranulation rough sets, drawing on the superiority of concept lattices to allow more applications. For example, Qian et al. [30] established a new decision-theoretic rough set from the perspective of multigranulation rough sets. Sun et al. [31] discussed three-way group decision-making over two universes using multigranulation fuzzy decision-theoretic rough sets.
In the aforementioned studies, learning cost was not deliberately considered. However, cost-sensitive learning problems frequently appear in many real applications, such as medical treatment, machine fault diagnosis, automated equipment testing, buffer management, internet-based distributed systems, and others [32]. Note that, in these fields, the main cost may differ; it includes money, skilled labor, time, memory, bandwidth, and so on. In the past decade, cost-sensitive learning systems with different types of costs were intensively studied, including learning cost [33,34], test cost [35], and both simultaneously [36,37]. Meanwhile, cost-sensitive decision systems focused on decision-making problems were also investigated in [35,38,39].
Facing various kinds of costs, different people may have different considerations, which lead to different attitude preferences. For example, rich people may choose to save labor or time, while ordinary people may take money as their first concern. One may be optimistic about some costs but pessimistic about others. In other words, the state of being an optimist or a pessimist may not be stable for complicated problems; it depends on the resources one has and the situations one faces.
Motivated by the above problem, the current study is concerned with handling unstable attitude preferences in a multi-source information fusion environment. More specifically, we provide a method of evaluating attitude preference. Then, F-multigranulation rough sets are proposed to synthesize optimistic and pessimistic attitude preferences, which contain three categories: $F^{+}$-multigranulation rough sets, $F^{\circ}$-multigranulation rough sets, and $F^{-}$-multigranulation rough sets. Moreover, the relationships among the three types of multigranulation rough sets are analyzed.
The rest of this paper is organized as follows. Section 2 reviews some basic notions related to the classical, pessimistic, and optimistic multigranulation rough sets and introduces their induced rules accordingly. Section 3 develops some useful methods to evaluate the attitude preferences in four kinds of decision systems, where two main issues are addressed with regard to attitude preference dependency: the first concerns the common attitude preference, while the other relates to the sequence-dependent attitude preference. Section 4 puts forward three types of multigranulation rough set models based on the different connection methods between optimistic and pessimistic attitude preferences. Section 5 concludes the paper with a brief summary and an outlook for our forthcoming study.

2. Preliminaries

In this section, some basic notions are recalled to make our paper self-contained.

2.1. Classical Rough Set Model

Rough set theory starts with an information system, which can be formally defined as follows.
Definition 1
([1]). An information system is a tuple $S=(U, AT)$, where U is a universe of discourse and AT is a non-empty finite attribute set.
In fact, each attribute $a\in AT$ maps an object $x_i\in U$ to exactly one value $a(x_i)$. Note that the value category divides information systems into different classes (e.g., classical, fuzzy, or interval-valued information systems). In this paper, we only consider classical information systems.
Moreover, when making decisions, we need to extend an information system to a decision system $S=(U, AT\cup D)$. In other words, compared to an information system, a decision system additionally contains a decision attribute set D.
Moreover, with each $A_t\subseteq AT$, we can define an equivalence relation
$$IND(A_t)=\{(x_i,x_j)\in U\times U : \forall a\in A_t,\ a(x_i)=a(x_j)\},$$
which can be described as $des([x_i]_{A_t})=\bigwedge_{a\in A_t}(a, a(x_i))$ from a semantic view, where $[x_i]_{A_t}=\{x_j\in U : (x_i,x_j)\in IND(A_t)\}$. Note that all $[x_i]_{A_t}$ induced by the equivalence relation $IND(A_t)$ form a partition of the universe of discourse U. Furthermore, for a target set $X\subseteq U$, we call $[\underline{A_t}(X), \overline{A_t}(X)]$ the classical rough set of X with respect to $A_t$, where
$$\underline{A_t}(X)=\{x_i\in U : [x_i]_{A_t}\subseteq X\}\quad \text{and} \quad \overline{A_t}(X)=\{x_i\in U : [x_i]_{A_t}\cap X\neq\emptyset\}.$$
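To make the constructions above concrete, the following minimal Python sketch (ours, not the authors' code) computes equivalence classes and the classical lower and upper approximations; the 4-object system and its attribute values are invented for illustration.

```python
def equivalence_class(x, attrs, table):
    """[x]_A: the objects agreeing with x on every attribute in attrs."""
    return {y for y in table if all(table[y][a] == table[x][a] for a in attrs)}

def rough_set(X, attrs, table):
    """Classical lower and upper approximations of X with respect to attrs."""
    lower = {x for x in table if equivalence_class(x, attrs, table) <= X}
    upper = {x for x in table if equivalence_class(x, attrs, table) & X}
    return lower, upper

# Hypothetical 4-object system with two attributes a and b.
table = {1: {'a': 0, 'b': 0}, 2: {'a': 0, 'b': 1},
         3: {'a': 1, 'b': 1}, 4: {'a': 1, 'b': 1}}
print(rough_set({1, 3, 4}, ['a'], table))  # ({3, 4}, {1, 2, 3, 4})
```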

2.2. Multigranulation Rough Sets and the Corresponding Rules

It can be seen from Section 2.1 that, in the standard version, rough sets are defined based on a single equivalence relation. Similarly, by extending a single equivalence relation to a family of them, multigranulation rough sets can be developed.
Definition 2
([17]). Let S be an information system and $A_1, A_2, \ldots, A_s\subseteq AT$. Then, we call the ordered pair $[\underline{\sum_{t=1}^{s}A_t}^{P}(X), \overline{\sum_{t=1}^{s}A_t}^{P}(X)]$ the pessimistic multigranulation rough set of X with respect to the attribute sets $A_1, A_2, \ldots, A_s$, where
$$\underline{\sum_{t=1}^{s}A_t}^{P}(X)=\{x_i\in U : [x_i]_{A_1}\subseteq X \wedge [x_i]_{A_2}\subseteq X \wedge \cdots \wedge [x_i]_{A_s}\subseteq X\},$$
$$\overline{\sum_{t=1}^{s}A_t}^{P}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{P}({\sim}X).$$
Let $S=(U, AT\cup D)$ be a decision system and $D=\{d\}$. Then, $x_i\in \underline{\sum_{t=1}^{s}A_t}^{P}([x_i]_{\{d\}})$ generates an "AND" decision rule as follows:
$$r_{x_i}: \bigwedge_{t=1}^{s} des([x_i]_{A_t}) \rightarrow des([x_i]_{\{d\}}).$$
Definition 3
([16]). Let S be an information system and $A_1, A_2, \ldots, A_s\subseteq AT$. Then, we call the ordered pair $[\underline{\sum_{t=1}^{s}A_t}^{O}(X), \overline{\sum_{t=1}^{s}A_t}^{O}(X)]$ the optimistic multigranulation rough set of X with respect to the attribute sets $A_1, A_2, \ldots, A_s$, where
$$\underline{\sum_{t=1}^{s}A_t}^{O}(X)=\{x_i\in U : [x_i]_{A_1}\subseteq X \vee [x_i]_{A_2}\subseteq X \vee \cdots \vee [x_i]_{A_s}\subseteq X\},$$
$$\overline{\sum_{t=1}^{s}A_t}^{O}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{O}({\sim}X).$$
Let $S=(U, AT\cup D)$ be a decision system and $D=\{d\}$. Then, $x_i\in \underline{\sum_{t=1}^{s}A_t}^{O}([x_i]_{\{d\}})$ leads to an "OR" decision rule as follows:
$$r_{x_i}: \bigvee_{t=1}^{s}\big(des([x_i]_{A_t}) \rightarrow des([x_i]_{\{d\}})\big).$$
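As a quick illustration of Definitions 2 and 3, here is a minimal Python sketch (ours, not from the paper) of the pessimistic and optimistic multigranulation lower approximations; the upper approximations follow by the duality stated above. The helper eq_class and the parameter names are our own scaffolding.

```python
def eq_class(x, attrs, table):
    return {y for y in table if all(table[y][a] == table[x][a] for a in attrs)}

def mgrs_lower(X, families, table, mode):
    """mode='P': [x]_{A_t} must fit in X for every A_t; mode='O': for some A_t."""
    combine = all if mode == 'P' else any
    return {x for x in table
            if combine(eq_class(x, attrs, table) <= X for attrs in families)}

def mgrs_upper(X, families, table, mode):
    # Duality: the upper approximation is the complement of the
    # lower approximation of the complement.
    U = set(table)
    return U - mgrs_lower(U - X, families, table, mode)
```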

3. A Method to Evaluate Attitude Preferences

This section puts forward a method to evaluate attitude preferences. Let $S=(U, AT)$ be an information system. On one hand, we define an attribute subset preference function as
$$v: 2^{AT}\rightarrow \mathbb{R},$$
where $\mathbb{R}$ is the set of real numbers. On the other hand, we define an attribute subset preference mapping as
$$m: 2^{AT}\rightarrow \{O, P\},$$
where O and P denote optimistic and pessimistic attitudes, respectively. Furthermore, the attribute subset preference function v and the mapping m are related as follows: for any $A\subseteq AT$,
$$m(A)=\begin{cases}O, & \text{if } v(A)\geq 0,\\ P, & \text{otherwise}.\end{cases}$$
Note that the detailed value of v will be set under certain types of information systems to be discussed below.
In what follows, we will discuss four types of information systems with regard to attitude preference dependency, and the connections between the four types of information systems are shown in Figure 1, where the system above is a special case of the system below; in other words, the system below can degenerate into the system above.
To aid reader orientation, we provide a glossary of key notations before proceeding.
$S=(U, AT, v)$: attitude-preference-independent system.
$S=(U, AT, v, g, gc)$: simple common attribute preference system.
$S=(U, AT, v, gr, gc)$: complex common attribute preference system.
$S=(U, AT, v, cv)$: sequence-dependent attitude preference system.
$\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)$: $F^{+}$ multigranulation lower approximation of X.
$\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)$: $F^{+}$ multigranulation upper approximation of X.
$\underline{\sum_{t=1}^{s}A_t}^{F^{\circ}}(X)$: $F^{\circ}$ multigranulation lower approximation of X.
$\overline{\sum_{t=1}^{s}A_t}^{F^{\circ}}(X)$: $F^{\circ}$ multigranulation upper approximation of X.
$\underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)$: $F^{-}$ multigranulation lower approximation of X.
$\overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)$: $F^{-}$ multigranulation upper approximation of X.

3.1. Attitude-Preference-Independent Systems

In many real applications, attitude preferences are independent. In other words, after taking test-cost attitudes into consideration, we can define a new type of information system.
Definition 4.
An attitude-preference-independent system is the 3-tuple $S=(U, AT, v)$, where $v: AT\rightarrow[-1,1]$ is an attribute preference degree function.
An attribute preference degree function can easily be represented by a vector $v=\langle v(a_1), v(a_2), \ldots, v(a_{|AT|})\rangle$. Note that, in accordance with Equation (7),
(i) we take an optimistic attitude on an attribute a if $0\leq v(a)\leq 1$;
(ii) otherwise, we take a pessimistic attitude on an attribute a if $-1\leq v(a)<0$.
In this paper, for any $A\subseteq AT$ with at least two attributes, we use $v(A)$ to denote the attitude preference degree of A when all of its attributes are taken into consideration simultaneously. It is apparent that, for any $a\in AT$, we have $v(\{a\})=v(a)$.
In an attitude-preference-independent system $S=(U, AT, v)$, we have
$$v(A)=\sum_{a\in A}v(a)\quad \text{for any } A\subseteq AT,$$
which indicates the independence property of attitude preference.
Example 1.
Let $S=(U, AT, v)$ be an attitude-preference-independent system, where the attribute preference degree function v is shown in Table 1 and $AT=\{Money, Labor, Time, Memory, Bandwidth\}$. For $A=\{Money, Labor, Memory\}$, we have $v(A)=v(Money)+v(Labor)+v(Memory)=0.9+0.8+(-0.8)=0.9$ and $m(A)=O$. That is, we will take an optimistic attitude for A.
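A small Python sketch of Definition 4 (ours; the degrees follow the values used in Examples 1-3, which we take to be Table 1's entries) reproduces the computation of Example 1:

```python
v = {'Money': 0.9, 'Labor': 0.8, 'Time': 0.7, 'Memory': -0.8, 'Bandwidth': -0.9}

def v_of(A):                 # independence: v(A) is the sum of the v(a)
    return sum(v[a] for a in A)

def m_of(A):                 # attitude mapping: optimistic iff v(A) >= 0
    return 'O' if v_of(A) >= 0 else 'P'

A = {'Money', 'Labor', 'Memory'}
print(round(v_of(A), 2), m_of(A))  # 0.9 O
```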

3.2. Simple Common Attribute Preference Systems

In some real applications, a group of attributes share a simple degree of preference for common attributes. For example, a blood sample collected from a patient can be used to test a set of indicators in diagnosis. In many cases, some indicators are closely related. As a consequence, their preference degrees are also related, and they may share a common attribute preference degree for various types of blood tests [32].
Definition 5.
A simple common attribute preference system is the 5-tuple $S=(U, AT, v, g, gc)$, where $g: AT\rightarrow\{1,\ldots,K\}$ ($1\leq K\leq |AT|$) is the simple group-membership function and $gc: \{1,\ldots,K\}\rightarrow(-1,1)$ is the group common attribute preference degree function. The kth group ($1\leq k\leq K$) is constituted by $G_k=\{a : g(a)=k\}$, and the attributes in the group $G_k$ share a common attribute preference degree $gc(k)$ satisfying $0<|gc(k)|\leq \min_{a\in G_k}\{|v(a)|\}$.
Let $A\subseteq AT$ and $a\in G_k\setminus A$. Then, we have
$$v(A\cup\{a\})=\begin{cases}v(A)+v(a)-gc(k), & \text{if } A\cap G_k\neq\emptyset,\\ v(A)+v(a), & \text{otherwise}.\end{cases}$$
Similar to the case in Section 3.1,
(i) if the kth group represents optimistic attitudes, then $0\leq gc(k)<1$;
(ii) otherwise, if the kth group represents pessimistic attitudes, then $-1<gc(k)\leq 0$.
Example 2.
Let $S=(U, AT, v, g, gc)$ be a simple common attribute preference system, where the attribute preference degree function v, the group-membership function g, and the group common attribute preference degree function gc are shown in Table 1, Table 2 and Table 3, respectively. For $A=\{Labor, Time, Memory, Bandwidth\}$, by a recursive method, we have
$$v(A)=v(Labor)+v(Time)+v(Memory)+v(Bandwidth)-gc(2)-gc(3)=0.8+0.7+(-0.8)+(-0.9)-0.2-(-0.25)=-0.15.$$
Then, based on Equation (8), it follows that $m(A)=P$, which means that we will be pessimistic when facing the factors labor, time, memory, and bandwidth simultaneously.
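The recursion of Definition 5 can be sketched in Python as follows (ours, not the authors' code); since Tables 2 and 3 are not reproduced here, the grouping g and the degrees gc are assumptions recovered from the computations in Examples 2 and 3.

```python
v  = {'Money': 0.9, 'Labor': 0.8, 'Time': 0.7, 'Memory': -0.8, 'Bandwidth': -0.9}
g  = {'Money': 1, 'Labor': 2, 'Time': 2, 'Memory': 3, 'Bandwidth': 3}  # assumed grouping
gc = {1: 0.1, 2: 0.2, 3: -0.25}

def v_of(attrs):
    A, total = set(), 0.0
    for a in attrs:                         # add the attributes one at a time
        k = g[a]
        shared = any(g[b] == k for b in A)  # does A already meet group k?
        total += v[a] - (gc[k] if shared else 0.0)
        A.add(a)
    return total

print(round(v_of(['Labor', 'Time', 'Memory', 'Bandwidth']), 2))  # -0.15
```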

3.3. Complex Common Attribute Preference Systems

In a simple common attribute preference system, an attribute is assumed to be exactly included in one group. But, on some occasions, an attribute may belong to more than one group. The following model caters to such a situation.
Definition 6.
A complex common attribute preference system is the 5-tuple $S=(U, AT, v, gr, gc)$, where $gr: \{1,2,\ldots,K\}\rightarrow 2^{AT}$ is the complex group-membership function. The kth group ($1\leq k\leq K$) is constituted by $G_k=gr(k)$, and the attributes in the group $G_k$ share a common attribute preference degree.
Let $A\subseteq AT$ and $a\in AT\setminus A$. Then, we have
$$v(A\cup\{a\})=v(A)+v(a)-\sum_{1\leq k\leq K}\varphi_k(A,a)\,gc(k),$$
where the function $\varphi_k(A,a)$ is defined as
$$\varphi_k(A,a)=\begin{cases}1, & \text{if } A\cap G_k\neq\emptyset \text{ and } a\in G_k,\\ 0, & \text{otherwise}.\end{cases}$$
It is interesting that Equation (10) degenerates into Equation (9) when the attribute a belongs to one group only. For example, suppose that the attribute a belongs to the group t ($1\leq t\leq K$) only. Then,
$$\sum_{1\leq k\leq K}\varphi_k(A,a)\,gc(k)=\varphi_t(A,a)\,gc(t),$$
and, in this case, Equation (10) degenerates into
$$v(A\cup\{a\})=\begin{cases}v(A)+v(a)-gc(t), & \text{if } A\cap G_t\neq\emptyset,\\ v(A)+v(a), & \text{otherwise}.\end{cases}$$
Moreover, for a better understanding of the complex group-membership function gr, it can be represented by a matrix
$$gr=(gr_{ij})_{K\times|AT|},$$
where row i represents the ith group $G_i$, column j represents the jth attribute $a_j$, and the value $gr_{ij}$ at the intersection of the ith row and jth column is defined as
$$gr_{ij}=\begin{cases}1, & \text{if } a_j \text{ belongs to the group } G_i, \text{ i.e., } a_j\in gr(i),\\ 0, & \text{otherwise}.\end{cases}$$
Example 3.
Let $S=(U, AT, v, gr, gc)$ be a complex common attribute preference system, where the attribute preference degree function v, the complex group-membership function gr, and the group common attribute preference degree function gc are shown in Table 1, Table 4 and Table 3, respectively. According to the above discussion, Table 4 can be equivalently transformed into Table 5. That is,
$$gr=\begin{pmatrix}1 & 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1\end{pmatrix}.$$
Let $A=\{Money, Labor, Time, Memory, Bandwidth\}$. Then, we have
$$\begin{aligned}
v(A) &= v(\{Money, Labor, Time, Memory\}\cup\{Bandwidth\})\\
&= v(\{Money, Labor, Time, Memory\})+v(Bandwidth)-gc(3)\\
&= v(\{Money, Labor, Time\})+v(Memory)+v(Bandwidth)-gc(3)\\
&= v(\{Money, Labor\})+v(Time)-gc(2)+v(Memory)+v(Bandwidth)-gc(3)\\
&= v(Money)+v(Labor)-gc(1)+v(Time)-gc(2)+v(Memory)+v(Bandwidth)-gc(3)\\
&= 0.9+0.8+0.7+(-0.8)+(-0.9)-0.1-0.2-(-0.25)=0.65.
\end{aligned}$$
Furthermore, we have $m(A)=O$, which means that we will take an optimistic attitude when facing all the factors, money, labor, time, memory, and bandwidth, simultaneously.
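Analogously, a Python sketch of Definition 6 with the indicator function $\varphi_k$ reproduces Example 3; the membership sets follow the matrix of Table 5, and the rest is our illustrative scaffolding.

```python
v  = {'Money': 0.9, 'Labor': 0.8, 'Time': 0.7, 'Memory': -0.8, 'Bandwidth': -0.9}
gr = {1: {'Money', 'Labor'}, 2: {'Labor', 'Time'}, 3: {'Memory', 'Bandwidth'}}
gc = {1: 0.1, 2: 0.2, 3: -0.25}

def v_of(attrs):
    A, total = set(), 0.0
    for a in attrs:
        # phi_k(A, a) = 1 iff A already intersects group k and a belongs to k
        discount = sum(gc[k] for k, Gk in gr.items() if (A & Gk) and a in Gk)
        total += v[a] - discount
        A.add(a)
    return total

print(round(v_of(['Money', 'Labor', 'Time', 'Memory', 'Bandwidth']), 2))  # 0.65
```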

3.4. Sequence-Dependent Attitude Preference Systems

In some real applications, if we take the execution order into consideration, the outcome may be different. For example, one possible way is to take money as our first priority; then, we will save labor and time simultaneously. On the other hand, if we choose to complete the work by ourselves, then we can save some labor cost and money, but the cost of time cannot be reduced.
Definition 7.
A sequence-dependent attitude preference system is the 4-tuple $S=(U, AT, v, cv)$, where $cv: 2^{AT}\times 2^{AT}\rightarrow(-1,1)$ is the conditional attitude preference degree function. That is, for $A_1\subseteq AT$ and $A_2\subseteq AT$, $cv(A_2|A_1)$ is the attitude preference degree of $A_2$ after the attributes of $A_1$ have been considered.
As mentioned at the beginning of Section 3, v ( A ) is the attitude preference degree of A when all the attributes of A have been taken into consideration simultaneously.
Note that the simplest way of representing the function v is to use a $2^{|AT|}$-dimensional vector. Moreover, the simplest representation of the function cv is a $2^{|AT|}\times 2^{|AT|}$ matrix as follows:
$$cv=\big(cv(A_1|A_2)\big)_{2^{|AT|}\times 2^{|AT|}}.$$
Let $sv(A_1, A_2)$ denote the attitude preference degree of a sequence $A_1, A_2$. Then,
$$sv(A_1,A_2)=v(A_1)+cv(A_2|A_1).$$
Moreover, by using the above formula recursively, we can compute the attitude preference degree of a sequence $A_1, A_2, \ldots, A_n$. That is,
$$sv(A_1,A_2,\ldots,A_n)=sv(A_1,A_2,\ldots,A_{n-1})+cv(A_n|A_1\cup A_2\cup\cdots\cup A_{n-1})=v(A_1)+cv(A_2|A_1)+\cdots+cv(A_n|A_1\cup A_2\cup\cdots\cup A_{n-1}).$$
Apparently, there may be different execution orders for a sequence of A 1 , A 2 , ⋯, A n . For our purpose, we can choose such an execution order for a given sequence that can bring the highest attitude preference degree.
Definition 8
([40]). For $A_1\subseteq AT$ and $A_2\subseteq AT$, let $A_1\cap A_2=\emptyset$. If
$$sv(A_1,A_2)=sv(A_2,A_1)=v(A_1)+v(A_2),$$
we hold that $A_1$ and $A_2$ are sequence-independent.
Proposition 1.
For $A_1\subseteq AT$ and $A_2\subseteq AT$, let $A_1\cap A_2=\emptyset$. If $A_1$ and $A_2$ are sequence-independent, then
$$cv(A_1|A_2)=v(A_1)\quad \text{and} \quad cv(A_2|A_1)=v(A_2).$$
Proof. 
Since $A_1$ and $A_2$ are sequence-independent, it follows that $sv(A_1,A_2)=sv(A_2,A_1)=v(A_1)+v(A_2)$. On the other hand, we have $sv(A_1,A_2)=v(A_1)+cv(A_2|A_1)$. To sum up, we obtain $cv(A_2|A_1)=v(A_2)$.
Moreover, c v ( A 1 | A 2 ) = v ( A 1 ) can be proved in a similar manner. □
It should be pointed out that considering attributes simultaneously is most effective, while considering them separately is least effective. That is, in absolute degree,
$$|v(A_1\cup A_2)|\leq |sv(A_1,A_2)|\leq |v(A_1)+v(A_2)|.$$
It is evident that, if A 1 and A 2 are attitude-independent (e.g., as in Section 3.1), then A 1 and A 2 are sequence-independent.
Example 4.
Consider three symptoms caused by flu, which are fever, headache, and cough. For simplicity, we use f, h, and c to represent fever, headache, and cough, respectively.
First, as fever causes headache and is soon followed by cough, fever is our first concern. Then, it is reasonable that we take a pessimistic attitude on fever and an optimistic attitude on headache and cough.
Second, as these three symptoms are correlated with each other, the degree of $cv(a|b)$ is less than that of $v(a)$, and the degree of $cv(a\cup b|c)$ is less than that of $v(a\cup b)$, where $a,b,c\in\{fever, headache, cough\}$.
Finally, by rating the importance of these three symptoms as well as their combinations, we can assign the values of the preference degree functions v and cv.
As another example, without giving the details, we provide one reasonable assignment of the functions v and cv as follows:
$$\begin{gathered}
v(f)=-0.95,\quad v(h)=0.85,\quad v(c)=0.85,\\
cv(f|h)=-0.9,\quad cv(h|f)=0.8,\quad cv(f|c)=-0.9,\quad cv(c|f)=0.8,\\
cv(f|\{h\}\cup\{c\})=-0.8,\quad cv(\{h\}\cup\{c\}|f)=1.6,\\
cv(h|c)=0.8,\quad cv(c|h)=0.8,\quad cv(h|\{f\}\cup\{c\})=0.7,\quad cv(\{f\}\cup\{c\}|h)=-0.2,\\
cv(c|\{f\}\cup\{h\})=0.7,\quad cv(\{f\}\cup\{h\}|c)=-0.15.
\end{gathered}$$
Then, we have
$$sv(h,c,f)=sv(h,c)+cv(f|\{h\}\cup\{c\})=v(h)+cv(c|h)+cv(f|\{h\}\cup\{c\})=0.85+0.8+(-0.8)=0.85,$$
and $m(\{h,c,f\})=O$ follows. In other words, we will take an optimistic attitude when facing the symptoms headache, cough, and fever in this sequence.
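The sequence evaluation can be coded directly from the recursion for sv (a sketch of ours; only the cv entries needed by the example are filled in):

```python
v  = {frozenset({'h'}): 0.85}                           # v(h); other values omitted
cv = {(frozenset({'c'}), frozenset({'h'})): 0.8,        # cv(c | h)
      (frozenset({'f'}), frozenset({'h', 'c'})): -0.8}  # cv(f | {h} U {c})

def sv(sequence):
    """sv(A_1, ..., A_n) = v(A_1) + sum of cv(A_k | A_1 U ... U A_{k-1})."""
    total, done = v[frozenset(sequence[0])], set(sequence[0])
    for step in sequence[1:]:
        total += cv[(frozenset(step), frozenset(done))]
        done |= set(step)
    return total

print(round(sv(['h', 'c', 'f']), 2))  # 0.85, so m = O as in Example 4
```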

4. F Multigranulation Rough Sets: A Useful Way to Synthesize Optimistic and Pessimistic Attitudes

In this section, according to the different ways of synthesizing optimistic and pessimistic attitudes, we put forward $F^{+}$ multigranulation rough sets, $F^{\circ}$ multigranulation rough sets, and $F^{-}$ multigranulation rough sets. Moreover, we uniformly call these three kinds of multigranulation rough sets F multigranulation rough sets. Furthermore, in order to distinguish F multigranulation rough sets from the classical ones, we call the classical optimistic and pessimistic multigranulation rough sets Q multigranulation rough sets.
In what follows, for the sake of clarity, any subset A j of A T is assumed to take a certain attitude preference. That is, m ( A j ) can be computed by the four methods introduced in Section 3.

4.1. $F^{+}$ Multigranulation Rough Sets and Their Induced Rules

Before embarking on establishing the F + multigranulation rough set model, we need to emphasize that the information systems to be discussed below are among the four attitude preference systems introduced in Section 3. If one does not want to point out which kind of attitude preference system he/she discusses, then it can simply be called an attitude preference system.
Definition 9.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, $T=\{1,2,\ldots,s\}$ be an index set, and $m(A_t)$ be the attitude preference of $A_t$. Then, the $F^{+}$ multigranulation lower and upper approximations of X are, respectively, defined as
$$\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)=\Big\{x\in U : \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X\big) \vee \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq X\big)\Big\},$$
$$\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X),$$
where $\vee$ and $\wedge$ are the logical disjunction and conjunction operators, respectively, and the symbol $F^{+}$ means that an optimistic attitude is adopted in synthesizing optimistic and pessimistic attitudes.
The pair $\big[\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X), \overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)\big]$ is referred to as the $F^{+}$ multigranulation rough set of X with respect to the attribute sets $A_1, A_2, \ldots, A_s$.
According to this definition, any $x\in \underline{\sum_{t=1}^{s}A_t}^{F^{+}}([x]_{\{d\}})$ induces the following "AND–OR" decision rule from an attitude preference decision system:
$$r_x: \Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i})\Big) \vee \Big(\bigvee_{m(A_j)=O,\, j\in T} des([x]_{A_j})\Big) \rightarrow des([x]_{\{d\}}),$$
and at least one of the rules $\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i}) \rightarrow des([x]_{\{d\}})$ and $des([x]_{A_j}) \rightarrow des([x]_{\{d\}})$ is true, where $m(A_j)=O$, $j\in T$.
Moreover, the support factor of the rule $r_x$ is defined as
$$Supp\Big(\Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i})\Big) \vee \Big(\bigvee_{m(A_j)=O,\, j\in T} des([x]_{A_j})\Big) \rightarrow des([x]_{\{d\}})\Big)=\max(\alpha,\beta),$$
where $\alpha=\min\Big\{\frac{|[x]_{A_i}\cap [x]_{\{d\}}|}{|U|} : m(A_i)=P\Big\}$ and $\beta=\max\Big\{\frac{|[x]_{A_j}\cap [x]_{\{d\}}|}{|U|} : m(A_j)=O\Big\}$.
Moreover, the certainty factor of this rule is defined as
$$Cer\Big(\Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i})\Big) \vee \Big(\bigvee_{m(A_j)=O,\, j\in T} des([x]_{A_j})\Big) \rightarrow des([x]_{\{d\}})\Big)=\max(\nu,\xi),$$
where $\nu=\min\Big\{\frac{|[x]_{A_i}\cap [x]_{\{d\}}|}{|[x]_{A_i}|} : m(A_i)=P\Big\}$ and $\xi=\max\Big\{\frac{|[x]_{A_j}\cap [x]_{\{d\}}|}{|[x]_{A_j}|} : m(A_j)=O\Big\}$.
Example 5.
Consider the descriptions in Table 6 of seven persons who suffer (or do not suffer) from flu. For each person, four indicators associated with flu are considered: sneeze, temperature, headache, and cough. For example, person $x_1$ does not sneeze, has a normal temperature, and suffers from neither headache nor cough, so it can be seen that $x_1$ is not a flu-infected patient.
Let $U=\{x_1, x_2, x_3, x_4, x_5, x_6, x_7\}$ and $AT=\{a_1, a_2, a_3, a_4\}$ ($a_1$: sneeze, $a_2$: temperature, $a_3$: headache, $a_4$: cough, and d: flu). According to the empirical knowledge of experts, sneezing is a sign of bacterial infection that leads to a change in body temperature, with headache and cough soon following. In this circumstance, for the treatment of flu, the importance of sneezing is ranked first, followed by temperature. So, it is reasonable that
$$v(a_1)<v(a_2)<0,\quad v(a_3)>0\quad \text{and} \quad v(a_4)>0.$$
Take $A_1=\{a_1\}$, $A_2=\{a_2\}$, $A_3=\{a_3\}$, and $A_4=\{a_4\}$; the attitude we take on each $A_t$ is shown in Table 7. Concretely, the attitude on $A_1$ and $A_2$ is pessimistic, and the attitude on $A_3$ and $A_4$ is optimistic.
We can compute the partitions of the universe of discourse:
$$U/IND(A_1)=\{\{x_1,x_2,x_3\},\{x_4,x_5,x_6,x_7\}\},$$
$$U/IND(A_2)=\{\{x_1,x_2\},\{x_3,x_4,x_5\},\{x_6,x_7\}\},$$
$$U/IND(A_3)=\{\{x_1,x_2\},\{x_3,x_4,x_5,x_7\},\{x_6\}\},$$
$$U/IND(A_4)=\{\{x_1\},\{x_2,x_3,x_4,x_7\},\{x_5,x_6\}\},$$
$$U/IND(\{d\})=\{\{x_1,x_2,x_3\},\{x_4,x_5,x_6,x_7\}\}.$$
We have
$$\underline{\{A_1+A_2+A_3+A_4\}}^{F^{+}}(\{x_1,x_2,x_3\})=\{x_1,x_2\},$$
$$\underline{\{A_1+A_2+A_3+A_4\}}^{F^{+}}(\{x_4,x_5,x_6,x_7\})=\{x_5,x_6,x_7\}.$$
Then, we obtain the following "AND–OR" decision rules from the attitude preference decision system S in Table 6 using $F^{+}$ multigranulation rough sets:
$$r_{x_1}: ((Sneeze, No)\wedge(Temperature, Normal))\vee(Headache, No)\vee(Cough, Never)\rightarrow(Flu, No);$$
$$r_{x_2}: ((Sneeze, No)\wedge(Temperature, Normal))\vee(Headache, No)\vee(Cough, Sometimes)\rightarrow(Flu, No);$$
$$r_{x_5}: ((Sneeze, Yes)\wedge(Temperature, Slightly\ high))\vee(Headache, A\ little)\vee(Cough, Frequently)\rightarrow(Flu, Yes);$$
$$r_{x_6}: ((Sneeze, Yes)\wedge(Temperature, High))\vee(Headache, Serious)\vee(Cough, Frequently)\rightarrow(Flu, Yes);$$
$$r_{x_7}: ((Sneeze, Yes)\wedge(Temperature, High))\vee(Headache, A\ little)\vee(Cough, Sometimes)\rightarrow(Flu, Yes).$$
Moreover, we have
$$Supp(r_{x_1})=\max\Big(\min\Big(\frac{|[x_1]_{A_1}\cap [x_1]_{\{d\}}|}{|U|}, \frac{|[x_1]_{A_2}\cap [x_1]_{\{d\}}|}{|U|}\Big), \max\Big(\frac{|[x_1]_{A_3}\cap [x_1]_{\{d\}}|}{|U|}, \frac{|[x_1]_{A_4}\cap [x_1]_{\{d\}}|}{|U|}\Big)\Big)=\max\Big(\min\Big(\frac{3}{7},\frac{2}{7}\Big), \max\Big(\frac{2}{7},\frac{1}{7}\Big)\Big)=\frac{2}{7},$$
$$Cer(r_{x_1})=\max\Big(\min\Big(\frac{|[x_1]_{A_1}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_1}|}, \frac{|[x_1]_{A_2}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_2}|}\Big), \max\Big(\frac{|[x_1]_{A_3}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_3}|}, \frac{|[x_1]_{A_4}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_4}|}\Big)\Big)=1.$$
Likewise, we can obtain
$$Supp(r_{x_2})=\max\Big(\min\Big(\frac{3}{7},\frac{2}{7}\Big), \max\Big(\frac{2}{7},\frac{2}{7}\Big)\Big)=\frac{2}{7}\quad \text{and} \quad Cer(r_{x_2})=1;$$
$$Supp(r_{x_5})=\max\Big(\min\Big(\frac{4}{7},\frac{2}{7}\Big), \max\Big(\frac{3}{7},\frac{2}{7}\Big)\Big)=\frac{3}{7}\quad \text{and} \quad Cer(r_{x_5})=1;$$
$$Supp(r_{x_6})=\max\Big(\min\Big(\frac{4}{7},\frac{2}{7}\Big), \max\Big(\frac{1}{7},\frac{2}{7}\Big)\Big)=\frac{2}{7}\quad \text{and} \quad Cer(r_{x_6})=1;$$
$$Supp(r_{x_7})=\max\Big(\min\Big(\frac{4}{7},\frac{2}{7}\Big), \max\Big(\frac{3}{7},\frac{2}{7}\Big)\Big)=\frac{3}{7}\quad \text{and} \quad Cer(r_{x_7})=1.$$
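To make Definition 9 and Example 5 reproducible, here is a minimal Python sketch (ours, not the authors' code); each granulation is encoded directly by its partition from the example, with $x_i$ written as i.

```python
U = set(range(1, 8))
partitions = {                        # U/IND(A_t), copied from Example 5
    'A1': [{1, 2, 3}, {4, 5, 6, 7}],
    'A2': [{1, 2}, {3, 4, 5}, {6, 7}],
    'A3': [{1, 2}, {3, 4, 5, 7}, {6}],
    'A4': [{1}, {2, 3, 4, 7}, {5, 6}],
}
attitude = {'A1': 'P', 'A2': 'P', 'A3': 'O', 'A4': 'O'}  # Table 7

def block(x, name):                   # the class [x]_{A_t}
    return next(b for b in partitions[name] if x in b)

def pess(x, X):                       # conjunction over the pessimistic granulations
    return all(block(x, t) <= X for t in attitude if attitude[t] == 'P')

def opti(x, X):                       # disjunction over the optimistic granulations
    return any(block(x, t) <= X for t in attitude if attitude[t] == 'O')

def f_plus_lower(X):                  # Definition 9: pess OR opti
    return {x for x in U if pess(x, X) or opti(x, X)}

print(f_plus_lower({1, 2, 3}))        # {1, 2}
print(f_plus_lower({4, 5, 6, 7}))     # {5, 6, 7}
```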
Proposition 2.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, the following properties hold:
(i) $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(\emptyset)=\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(\emptyset)=\emptyset$;
(ii) $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(U)=\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(U)=U$.
Proof. 
At first, we prove $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(\emptyset)=\emptyset$. For any $x\in U$ and $A_t\subseteq AT$, we have $[x]_{A_t}\neq\emptyset$ due to $x\in[x]_{A_t}$. Therefore, it follows that
$$\Big\{x\in U : \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq\emptyset\big) \vee \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq\emptyset\big)\Big\}=\emptyset.$$
According to Definition 9, we conclude $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(\emptyset)=\emptyset$.
Secondly, we prove $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(U)=U$. For any $x\in U$ and $A_t\subseteq AT$, we have $[x]_{A_t}\subseteq U$. Therefore, it follows that
$$\Big\{x\in U : \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq U\big) \vee \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq U\big)\Big\}=U.$$
According to Definition 9, we obtain $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(U)=U$.
Moreover, we prove $\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(\emptyset)=\emptyset$. According to Definition 9, we have
$$\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(\emptyset)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}\emptyset)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(U)={\sim}U=\emptyset.$$
Finally, we prove $\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(U)=U$. According to Definition 9, we have
$$\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(U)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}U)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(\emptyset)={\sim}\emptyset=U.$$
To sum up, we have completed the proof of Proposition 2. □
By Proposition 2, we find that the two special sets (the empty set and the universe of discourse) are equal to their own lower and upper approximations.
Proposition 3.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, for any $X_1, X_2\subseteq U$ with $X_1\subseteq X_2$, the following properties hold:
(i) $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_1)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_2)$;
(ii) $\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_1)\subseteq \overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_2)$.
Proof.
(i) Since $X_1\subseteq X_2$, we have
$$x\in \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_1) \Rightarrow \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X_1\big) \vee \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq X_1\big) \Rightarrow \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X_2\big) \vee \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq X_2\big) \Rightarrow x\in \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_2).$$
Hence, we obtain $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_1)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_2)$.
(ii) Based on the result of (i), we have
$$X_1\subseteq X_2 \Rightarrow {\sim}X_2\subseteq {\sim}X_1 \Rightarrow \underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X_2)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X_1) \Rightarrow {\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X_1)\subseteq {\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X_2) \Rightarrow \overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_1)\subseteq \overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X_2).$$
To sum up, we have completed the proof of Proposition 3. □
By Proposition 3, we can observe that the lower and upper approximations of a target concept become larger as the size of the target concept increases. Thus, if an inclusion relationship exists between the two sets, their lower and upper approximations also satisfy the inclusion relationship.

4.2. $F^{\circ}$ Multigranulation Rough Sets and Their Induced Rules

Definition 10.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, $T=\{1,2,\ldots,s\}$ be an index set, and $m(A_t)$ be the attitude preference of $A_t$. Then, the $F^{\circ}$ multigranulation lower and upper approximations of a subset X of U are, respectively, defined as
$$\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)=\Big\{x\in U : \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X\big)\Big\}\quad \text{and} \quad \overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)={\sim}\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}({\sim}X),$$
where the symbol $F^{\circ}$ represents that only the pessimistic attitude is considered when we synthesize optimistic and pessimistic attitudes.
The pair $\big[\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X), \overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)\big]$ is referred to as the $F^{\circ}$ multigranulation rough set of X with respect to the attribute sets $A_1, A_2, \ldots, A_s$.
According to this definition, any $x\in \underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}([x]_{\{d\}})$ induces the following "AND" decision rule from an attitude preference decision system:
$$r_x: \bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i}) \rightarrow des([x]_{\{d\}}).$$
Moreover, the support factor of $r_x$ is defined as
$$Supp\Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i}) \rightarrow des([x]_{\{d\}})\Big)=\min\Big\{\frac{|[x]_{A_i}\cap [x]_{\{d\}}|}{|U|} : m(A_i)=P\Big\},$$
and the certainty factor of this rule is defined as
$$Cer\Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i}) \rightarrow des([x]_{\{d\}})\Big)=\min\Big\{\frac{|[x]_{A_i}\cap [x]_{\{d\}}|}{|[x]_{A_i}|} : m(A_i)=P\Big\}.$$
Example 6.
Continuing from Example 5, we have
$$\underline{\{A_1+A_2+A_3+A_4\}}^{F^{\circ}}(\{x_1,x_2,x_3\})=\{x_1,x_2\}$$
$$\text{and} \quad \underline{\{A_1+A_2+A_3+A_4\}}^{F^{\circ}}(\{x_4,x_5,x_6,x_7\})=\{x_6,x_7\}.$$
Then, we obtain the following "AND" decision rules:
$$r_{x_1}, r_{x_2}: (Sneeze, No)\wedge(Temperature, Normal)\rightarrow(Flu, No);$$
$$r_{x_6}, r_{x_7}: (Sneeze, Yes)\wedge(Temperature, High)\rightarrow(Flu, Yes).$$
Moreover, we compute
$$Supp(r_{x_1})=Supp(r_{x_2})=\min\Big(\frac{|[x_1]_{A_1}\cap [x_1]_{\{d\}}|}{|U|}, \frac{|[x_1]_{A_2}\cap [x_1]_{\{d\}}|}{|U|}\Big)=\min\Big(\frac{3}{7},\frac{2}{7}\Big)=\frac{2}{7},$$
$$Cer(r_{x_1})=Cer(r_{x_2})=\min\Big(\frac{|[x_1]_{A_1}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_1}|}, \frac{|[x_1]_{A_2}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_2}|}\Big)=1,$$
$$Supp(r_{x_6})=Supp(r_{x_7})=\min\Big(\frac{|[x_6]_{A_1}\cap [x_6]_{\{d\}}|}{|U|}, \frac{|[x_6]_{A_2}\cap [x_6]_{\{d\}}|}{|U|}\Big)=\min\Big(\frac{4}{7},\frac{2}{7}\Big)=\frac{2}{7},$$
$$Cer(r_{x_6})=Cer(r_{x_7})=\min\Big(\frac{|[x_6]_{A_1}\cap [x_6]_{\{d\}}|}{|[x_6]_{A_1}|}, \frac{|[x_6]_{A_2}\cap [x_6]_{\{d\}}|}{|[x_6]_{A_2}|}\Big)=1.$$
Proposition 4.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, the following properties hold:
(i) $\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(\emptyset)=\overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(\emptyset)=\emptyset$;
(ii) $\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(U)=\overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(U)=U$.
Proof. 
This proposition can be proved in an analogous manner to that of Proposition 2. □
By Proposition 4, we find that the two special sets (the empty set and the universe of discourse) are equal to their own lower and upper approximations.
Proposition 5.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, for any $X_1, X_2\subseteq U$ with $X_1\subseteq X_2$, the following properties hold:
(i) $\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X_1)\subseteq \underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X_2)$;
(ii) $\overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X_1)\subseteq \overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X_2)$.
Proof. 
This proposition can be proved in an analogous manner to that of Proposition 3. □
By Proposition 5, we can observe that the lower and upper approximations of a target concept become larger as the size of the target concept increases. Thus, if an inclusion relationship exists between two sets, their lower and upper approximations also satisfy the inclusion relationship.
Proposition 6.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, for any $X\subseteq U$, the following properties hold:
(i) $\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)$;
(ii) $\overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)$.
Proof.
(i) Note that
$$x\in \underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X) \Rightarrow \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X\big) \Rightarrow \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X\big) \vee \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq X\big) \Rightarrow x\in \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X).$$
Thus, we obtain $\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)$.
(ii) We have $\overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)={\sim}\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}({\sim}X)$ and $\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X)$. Then, based on the result of (i), we have $\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}({\sim}X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X)$, which leads to ${\sim}\underline{\sum_{i=1}^{s}A_i}^{F^{\circ}}({\sim}X)\supseteq {\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X)$. That is, $\overline{\sum_{i=1}^{s}A_i}^{F^{\circ}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)$. □
In other words, Proposition 6 shows the relationships between the approximations of $F^{+}$ multigranulation rough sets and $F^{\circ}$ multigranulation rough sets.

4.3. $F^{-}$ Multigranulation Rough Sets and Their Induced Rules

Definition 11.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, $T=\{1,2,\ldots,s\}$ be an index set, and $m(A_t)$ be the attitude preference of $A_t$. Then, the $F^{-}$ multigranulation lower and upper approximations of a subset X of U are, respectively, defined as
$$\underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)=\Big\{x\in U : \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X\big) \wedge \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq X\big)\Big\},$$
$$\overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{-}}({\sim}X),$$
where the symbol $F^{-}$ represents that a pessimistic attitude is held when we synthesize optimistic and pessimistic attitudes.
The pair $\big[\underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X), \overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)\big]$ is referred to as the $F^{-}$ multigranulation rough set of X with respect to the attribute sets $A_1, A_2, \ldots, A_s$.
According to this definition, any $x\in \underline{\sum_{t=1}^{s}A_t}^{F^{-}}([x]_{\{d\}})$ induces the following "AND–OR" decision rule from an attitude preference decision system:
$$r_x: \Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i})\Big) \wedge \Big(\bigvee_{m(A_j)=O,\, j\in T} des([x]_{A_j})\Big) \rightarrow des([x]_{\{d\}}).$$
Obviously, at least one decision rule of the form $\big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i})\big) \wedge des([x]_{A_j}) \rightarrow des([x]_{\{d\}})$ is true, where $m(A_j)=O$, $j\in T$.
Moreover, the support factor of $r_x$ is defined as
$$Supp\Big(\Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i})\Big) \wedge \Big(\bigvee_{m(A_j)=O,\, j\in T} des([x]_{A_j})\Big) \rightarrow des([x]_{\{d\}})\Big)=\min(\alpha,\beta),$$
where $\alpha=\min\Big\{\frac{|[x]_{A_i}\cap [x]_{\{d\}}|}{|U|} : m(A_i)=P\Big\}$ and $\beta=\max\Big\{\frac{|[x]_{A_j}\cap [x]_{\{d\}}|}{|U|} : m(A_j)=O\Big\}$.
Moreover, the certainty factor of this rule is defined as
$$Cer\Big(\Big(\bigwedge_{m(A_i)=P,\, i\in T} des([x]_{A_i})\Big) \wedge \Big(\bigvee_{m(A_j)=O,\, j\in T} des([x]_{A_j})\Big) \rightarrow des([x]_{\{d\}})\Big)=\min(\nu,\xi),$$
where $\nu=\min\Big\{\frac{|[x]_{A_i}\cap [x]_{\{d\}}|}{|[x]_{A_i}|} : m(A_i)=P\Big\}$ and $\xi=\max\Big\{\frac{|[x]_{A_j}\cap [x]_{\{d\}}|}{|[x]_{A_j}|} : m(A_j)=O\Big\}$.
Example 7.
Continuing from Example 5, we have
$$\underline{\{A_1+A_2+A_3+A_4\}}^{F^{-}}(\{x_1,x_2,x_3\})=\{x_1,x_2\}$$
$$\text{and} \quad \underline{\{A_1+A_2+A_3+A_4\}}^{F^{-}}(\{x_4,x_5,x_6,x_7\})=\{x_6\}.$$
Then, we obtain the following "AND–OR" decision rules from the attitude preference decision system S in Table 6 using $F^{-}$ multigranulation rough sets:
$$r_{x_1}: ((Sneeze, No)\wedge(Temperature, Normal))\wedge((Headache, No)\vee(Cough, Never))\rightarrow(Flu, No);$$
$$r_{x_2}: ((Sneeze, No)\wedge(Temperature, Normal))\wedge((Headache, No)\vee(Cough, Sometimes))\rightarrow(Flu, No);$$
$$r_{x_6}: ((Sneeze, Yes)\wedge(Temperature, High))\wedge((Headache, Serious)\vee(Cough, Frequently))\rightarrow(Flu, Yes).$$
Moreover, we calculate
$$Supp(r_{x_1})=\min\Big(\min\Big(\frac{|[x_1]_{A_1}\cap [x_1]_{\{d\}}|}{|U|}, \frac{|[x_1]_{A_2}\cap [x_1]_{\{d\}}|}{|U|}\Big), \max\Big(\frac{|[x_1]_{A_3}\cap [x_1]_{\{d\}}|}{|U|}, \frac{|[x_1]_{A_4}\cap [x_1]_{\{d\}}|}{|U|}\Big)\Big)=\min\Big(\min\Big(\frac{3}{7},\frac{2}{7}\Big), \max\Big(\frac{2}{7},\frac{1}{7}\Big)\Big)=\frac{2}{7},$$
$$Cer(r_{x_1})=\min\Big(\min\Big(\frac{|[x_1]_{A_1}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_1}|}, \frac{|[x_1]_{A_2}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_2}|}\Big), \max\Big(\frac{|[x_1]_{A_3}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_3}|}, \frac{|[x_1]_{A_4}\cap [x_1]_{\{d\}}|}{|[x_1]_{A_4}|}\Big)\Big)=1.$$
Likewise, we compute
$$Supp(r_{x_2})=\min\Big(\min\Big(\frac{3}{7},\frac{2}{7}\Big), \max\Big(\frac{2}{7},\frac{2}{7}\Big)\Big)=\frac{2}{7}\quad \text{and} \quad Cer(r_{x_2})=1,$$
$$Supp(r_{x_6})=\min\Big(\min\Big(\frac{4}{7},\frac{2}{7}\Big), \max\Big(\frac{1}{7},\frac{2}{7}\Big)\Big)=\frac{2}{7}\quad \text{and} \quad Cer(r_{x_6})=1.$$
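Continuing the sketch given after Example 5 (reusing U, block, pess, and opti defined there), the $F^{\circ}$ and $F^{-}$ lower approximations of Definitions 10 and 11 differ only in how the two attitude parts are combined:

```python
def f_circ_lower(X):                  # Definition 10: pessimistic part only
    return {x for x in U if pess(x, X)}

def f_minus_lower(X):                 # Definition 11: pess AND opti
    return {x for x in U if pess(x, X) and opti(x, X)}

print(f_circ_lower({4, 5, 6, 7}))     # {6, 7}, as in Example 6
print(f_minus_lower({4, 5, 6, 7}))    # {6}, as in Example 7
```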
Proposition 7.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, the following properties hold:
(i) $\underline{\sum_{t=1}^{s}A_t}^{F^{-}}(\emptyset)=\overline{\sum_{t=1}^{s}A_t}^{F^{-}}(\emptyset)=\emptyset$;
(ii) $\underline{\sum_{t=1}^{s}A_t}^{F^{-}}(U)=\overline{\sum_{t=1}^{s}A_t}^{F^{-}}(U)=U$.
Proof. 
This proposition can be proved in an analogous manner to that of Proposition 2. □
By Proposition 7, we find that the two special sets (the empty set and the universe of discourse) are equal to their own lower and upper approximations.
Proposition 8.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, for any $X_1, X_2\subseteq U$ with $X_1\subseteq X_2$, the following properties hold:
(i) $\underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X_1)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X_2)$;
(ii) $\overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X_1)\subseteq \overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X_2)$.
Proof. 
This proposition can be proved in an analogous manner to that of Proposition 3. □
By Proposition 8, we can observe that the lower and upper approximations of a target concept become larger as the size of the target concept increases. Thus, if an inclusion relationship exists between two sets, their lower and upper approximations also satisfy the inclusion relationship.
Proposition 9.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, for any $X\subseteq U$, the following properties hold:
(i) $\underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{\circ}}(X)$;
(ii) $\overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{\circ}}(X)$.
Proof. 
This proposition can be proved in an analogous manner to that of Proposition 6. □
That is to say, Proposition 9 shows the relationships between the approximations of $F^{-}$ multigranulation rough sets and $F^{\circ}$ multigranulation rough sets.

4.4. A Comparative Study of F Multigranulation Rough Sets and Q Multigranulation Rough Sets

To facilitate the subsequent discussion, we denote the optimistic and pessimistic multigranulation rough sets (see Qian et al. [16,17] for details) by $Q^{+}$ multigranulation rough sets and $Q^{-}$ multigranulation rough sets, respectively.
(i)
F multigranulation rough sets degenerate into Q multigranulation rough sets in some cases.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, $T=\{1,2,\ldots,s\}$ be an index set, and $m(A_t)$ be the attitude preference of $A_t$. If, for any $t\in T$, we have $m(A_t)=P$, then F multigranulation rough sets can be viewed as $Q^{-}$ multigranulation rough sets. Otherwise, if, for any $t\in T$, we have $m(A_t)=O$, then F multigranulation rough sets can be viewed as $Q^{+}$ multigranulation rough sets.
Proposition 10.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. Then, for any $X\subseteq U$, the following properties hold:
(1) $\underline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{\circ}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X)$;
(2) $\overline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{\circ}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X)$.
Proof.
(1) At first, we prove $\underline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)$. Note that
$$x\in \underline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X) \Rightarrow \bigwedge_{t\in T}\big([x]_{A_t}\subseteq X\big) \Rightarrow \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X\big) \wedge \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq X\big) \Rightarrow x\in \underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X).$$
Thus, we obtain $\underline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)$.
Secondly, we prove $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X)$. Note that
$$x\in \underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X) \Rightarrow \bigwedge_{m(A_i)=P,\, i\in T}\big([x]_{A_i}\subseteq X\big) \vee \bigvee_{m(A_j)=O,\, j\in T}\big([x]_{A_j}\subseteq X\big) \Rightarrow \bigvee_{t\in T}\big([x]_{A_t}\subseteq X\big) \Rightarrow x\in \underline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X).$$
Thus, we have $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X)$.
Combining the above results with (i) of Propositions 6 and 9, we complete the proof of the first item.
(2) At first, we prove $\overline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)$. Note that $\overline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{Q^{-}}({\sim}X)$ and $\overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{-}}({\sim}X)$. Since $\underline{\sum_{t=1}^{s}A_t}^{Q^{-}}({\sim}X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{F^{-}}({\sim}X)$ based on the conclusion obtained in (1), it follows that ${\sim}\underline{\sum_{t=1}^{s}A_t}^{Q^{-}}({\sim}X)\supseteq {\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{-}}({\sim}X)$. That is, $\overline{\sum_{t=1}^{s}A_t}^{Q^{-}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{F^{-}}(X)$.
Secondly, we prove $\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X)$. Note that $\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X)$ and $\overline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X)={\sim}\underline{\sum_{t=1}^{s}A_t}^{Q^{+}}({\sim}X)$. Since $\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X)\subseteq \underline{\sum_{t=1}^{s}A_t}^{Q^{+}}({\sim}X)$ based on the conclusion obtained in (1), it follows that ${\sim}\underline{\sum_{t=1}^{s}A_t}^{F^{+}}({\sim}X)\supseteq {\sim}\underline{\sum_{t=1}^{s}A_t}^{Q^{+}}({\sim}X)$. That is, $\overline{\sum_{t=1}^{s}A_t}^{F^{+}}(X)\supseteq \overline{\sum_{t=1}^{s}A_t}^{Q^{+}}(X)$.
Combining the above results with (ii) of Propositions 6 and 9, we complete the proof of the second item. □
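Proposition 10's chain can also be checked numerically on the data of Example 5 by adding the two classical Q operators to the sketches given earlier (again reusing U, block, partitions, and the f_*_lower functions):

```python
def q_minus_lower(X):                 # classical pessimistic: every granulation
    return {x for x in U if all(block(x, t) <= X for t in partitions)}

def q_plus_lower(X):                  # classical optimistic: some granulation
    return {x for x in U if any(block(x, t) <= X for t in partitions)}

X = {4, 5, 6, 7}
chain = [q_minus_lower(X), f_minus_lower(X), f_circ_lower(X),
         f_plus_lower(X), q_plus_lower(X)]
print(chain)  # [{6}, {6}, {6, 7}, {5, 6, 7}, {4, 5, 6, 7}]
print(all(a <= b for a, b in zip(chain, chain[1:])))  # True
```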
(ii)
F multigranulation rough sets are generalized models of Q multigranulation rough sets.
Let S be an attitude preference information system, $A_1, A_2, \ldots, A_s\subseteq AT$, and $T=\{1,2,\ldots,s\}$ be an index set. If we do not hold the same attitude preference on $A_1, A_2, \ldots, A_s$, then neither pessimistic multigranulation rough sets nor optimistic multigranulation rough sets can deal with the information fusion problem, but F multigranulation rough sets remain effective by synthesizing optimistic and pessimistic attitude preferences.
(iii)
F multigranulation rough sets are related to more complicated concepts.
It has been pointed out that pessimistic multigranulation rough sets are related to formal concepts and optimistic multigranulation rough sets are related to object-oriented formal concepts. Furthermore, F multigranulation rough sets have something in common with AFS formal concepts [41] in knowledge discovery.
(iv)
F multigranulation rough sets use a more flexible logic compared to Q multigranulation rough sets.
As is well known, pessimistic multigranulation rough sets concern $\wedge$-definable granules and optimistic multigranulation rough sets concern $\vee$-definable granules, while F multigranulation rough sets are able to study $(\wedge,\vee)$-definable granules [42]. In other words, F multigranulation rough sets adopt a more flexible logic.

4.5. An Illustrative Application Case

Example 8.
Table 8 describes the development levels of 15 cities by using 11 indicators, which are economic development level (a), innovation capability (b), green development (c), urban ecological environment status (d), urban sustainable development status (e), residents’ quality of life (f), urban planning (g), basic design and construction (h), ecological environment protection (i), public service facilities (j), and public service level (k). The last column represents the level of urban development (DL), including high, medium, and low, abbreviated as H, M, and L.
After determining the attitude on each indicator, the decision rules that determine the level of urban development can be obtained based on the rough set models and methods established earlier. Since the procedure has been discussed in detail above, it is not repeated here.

5. Conclusions

In this section, we draw some conclusions to show the main contributions of our paper and provide an outlook for further study.
(i) A brief summary of our study
Different people may behave differently when they face different types of costs. In this study, we take two kinds of attitude preferences into consideration for multi-source information fusion, i.e., optimistic and pessimistic. More specifically, we first establish four kinds of attitude preference information systems to discuss the evaluation of attitude preferences. Then, we propose F multigranulation rough sets to synthesize optimistic and pessimistic attitude preferences, including three subtypes, i.e., $F^{+}$ multigranulation rough sets, $F^{\circ}$ multigranulation rough sets, and $F^{-}$ multigranulation rough sets. In addition, we generate new types of decision rules based on F multigranulation rough sets, which are further shown to be different from the so-called "AND" decision rules in pessimistic multigranulation rough sets and the "OR" decision rules in optimistic multigranulation rough sets. Finally, the relationship between the proposed multigranulation rough sets and concept lattices is analyzed from the viewpoint of the differences and relations between their rules.
(ii) The differences and similarities between our study and the existing ones
In what follows, we mainly distinguish the differences between our study and the existing ones with respect to the granulation environment and research objective.
  • Our work is different from that in [43] as far as the granulation environment is concerned. In fact, our comparative study was conducted under a multigranulation environment, while that in [43] was conducted under a single-granulation environment.
  • Our research is different from that in [39] in terms of the research objective. More specifically, the current study evaluates attitude preference, while the aim of Ref. [39] was to compute the test cost. In addition, the range of the attitude preference degree is $[-1,1]$, while the test cost is always larger than 0.
In addition to the above differences, there are also similarities between our contribution and the existing ones. For example, F multigranulation rough sets can be viewed as a generalization of optimistic multigranulation rough sets and pessimistic multigranulation rough sets.
(iii) An outlook for further study
It should be pointed out that, in practice, not all the attributes in an attitude preference decision system are necessary for F multigranulation approximations. So, attribute reduction with regard to F multigranulation approximations should be studied in the future.
Moreover, the attitude preferences in this study are definitive; that is, they are all known in advance. However, there may exist an interesting and more complicated occasion in which the attitude preferences are not known, due to our ignorance or other reasons (see Table 9 for an example). Then, the proposed methods would not be effective, and we would have to compute the attitude preferences first.
Moreover, although it is well known that rough set theory is related to formal concept analysis, and vice versa, there are no concept lattice models related to F multigranulation rough sets. So, an important and appealing problem is to explore new concept lattice models that can provide the same amount of information as F multigranulation rough sets.

Author Contributions

Conceptualization, H.W. and H.Z.; methodology, H.Z.; software, H.Z.; validation, Y.L., D.Z. and J.X.; formal analysis, H.Z.; investigation, H.W.; resources, J.X.; data curation, H.Z.; writing—original draft preparation, H.W. and H.Z.; writing—review and editing, H.W. and H.Z.; visualization, H.Z.; supervision, Y.L.; project administration, H.Z.; funding acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of Fujian Provincial Science and Technology Department under Grants No. 2024J01793, No. 2023H6034, and No. 2021H6037, in part by the Key Project of Quanzhou Science and Technology Plan under Grant No. 2021C008R, and in part by the sixth batch of Quanzhou City’s introduction of high-level talent team projects under Grant No. 2.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  2. Ji, X.; Duan, W.; Peng, J.; Yao, S. Fuzzy rough set attribute reduction based on decision ball model. Int. J. Approx. Reason. 2025, 179, 109364. [Google Scholar] [CrossRef]
  3. Yao, Y.Y. Three-way decision and granular computing. Int. J. Approx. Reason. 2018, 103, 107–123. [Google Scholar] [CrossRef]
  4. Hu, Q.H.; Yu, D.R.; Liu, J.F.; Wu, C.X. Neighborhood rough set based heterogeneous feature subset selection. Inf. Sci. 2008, 178, 3577–3594. [Google Scholar] [CrossRef]
  5. Jensen, R.; Shen, Q. Fuzzy-rough sets assisted attribute selection. IEEE Trans. Fuzzy Syst. 2007, 15, 73–89. [Google Scholar] [CrossRef]
  6. Liu, D.; Li, T.R.; Zhang, J.B. A rough set-based incremental approach for learning knowledge in dynamic incomplete information systems. Int. J. Approx. Reason. 2014, 55, 1764–1786. [Google Scholar] [CrossRef]
  7. Li, T.R.; Ruan, D.; Wets, G.; Song, J.; Xu, Y. A rough sets based characteristic relation approach for dynamic attribute generalization in data mining. Knowl.-Based Syst. 2007, 20, 485–494. [Google Scholar] [CrossRef]
  8. Tsang, E.C.C.; Chen, D.G.; Yeung, D.S.; Wang, X.Z.; Lee, J.W.T. Attributes reduction using fuzzy rough sets. IEEE Trans. Fuzzy Syst. 2008, 16, 1130–1141. [Google Scholar] [CrossRef]
  9. Du, Y.; Hu, Q.H.; Zhu, P.F.; Ma, P.J. Rule learning for classification based on neighborhood covering reduction. Inf. Sci. 2011, 181, 5457–5467. [Google Scholar] [CrossRef]
  10. Zhao, S.Y.; Tsang, E.C.C.; Chen, D.G. The model of fuzzy variable rough sets. IEEE Trans. Fuzzy Syst. 2009, 17, 451–467. [Google Scholar] [CrossRef]
  11. Zadeh, L.A. Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets Syst. 1997, 90, 111–117. [Google Scholar] [CrossRef]
  12. Chen, B.; Yuan, Z.; Peng, D.; Chen, X.; Chen, H.; Chen, Y. Integrating granular computing with density estimation for anomaly detection in high-dimensional heterogeneous data. Inf. Sci. 2025, 690, 121566. [Google Scholar] [CrossRef]
  13. Pinheiro, G.; Minz, S. Granular computing based segmentation and textural analysis (GrCSTA) framework for object-based LULC classification of fused remote sensing images. Appl. Intell. 2024, 54, 5748–5767. [Google Scholar] [CrossRef]
  14. Salehi, S.; Selamat, A.; Fujita, H. Systematic mapping study on granular computing. Knowl.-Based Syst. 2015, 80, 78–97. [Google Scholar] [CrossRef]
  15. Wu, W.Z.; Leung, Y. Theory and applications of granular labelled partitions in multi-scale decision tables. Inf. Sci. 2011, 181, 3878–3897. [Google Scholar] [CrossRef]
  16. Qian, Y.H.; Liang, J.Y.; Yao, Y.Y.; Dang, C.Y. MGRS: A multi-granulation rough set. Inf. Sci. 2010, 180, 949–970. [Google Scholar] [CrossRef]
  17. Qian, Y.H.; Li, S.Y.; Liang, J.Y.; Shi, Z.Z.; Wang, F. Pessimistic rough set based decisions: A multigranulation fusion strategy. Inf. Sci. 2014, 264, 196–210. [Google Scholar] [CrossRef]
  18. Lin, G.P.; Qian, Y.H.; Li, J.J. NMGRS: Neighborhood-based multigranulation rough sets. Int. J. Approx. Reason. 2012, 53, 1080–1093. [Google Scholar] [CrossRef]
  19. Liu, C.H.; Miao, D.Q.; Qian, J. On multi-granulation covering rough sets. Int. J. Approx. Reason. 2014, 55, 1404–1418. [Google Scholar] [CrossRef]
  20. She, Y.H.; He, X.L. On the structure of the multigranulation rough set model. Knowl.-Based Syst. 2012, 36, 81–92. [Google Scholar] [CrossRef]
  21. Yang, X.B.; Qi, Y.S.; Song, X.N.; Yang, J.Y. Test cost sensitive multigranulation rough set: Model and minimal cost selection. Inf. Sci. 2013, 250, 184–199. [Google Scholar] [CrossRef]
  22. Tan, A.H.; Li, J.J.; Lin, G.P. Evidence-theory-based numerical characterization of multigranulation rough sets in incomplete information systems. Fuzzy Sets Syst. 2016, 294, 18–35. [Google Scholar] [CrossRef]
  23. Qian, Y.H.; Liang, X.Y.; Lin, G.P.; Guo, Q.; Liang, J.Y. Local multigranulation decision-theoretic rough sets. Int. J. Approx. Reason. 2017, 82, 119–137. [Google Scholar] [CrossRef]
  24. She, Y.H.; He, X.L.; Shi, H.X.; Qian, Y.H. A multiple-valued logic approach for multigranulation rough set model. Int. J. Approx. Reason. 2017, 82, 270–284. [Google Scholar] [CrossRef]
  25. Deng, X.F.; Yao, Y.Y. Decision-theoretic three-way approximations of fuzzy sets. Inf. Sci. 2014, 279, 702–715. [Google Scholar] [CrossRef]
  26. Hu, B.Q. Three-way decisions space and three-way decisions. Inf. Sci. 2014, 281, 21–52. [Google Scholar] [CrossRef]
  27. Li, H.X.; Zhang, L.B.; Huang, B.; Zhou, X.Z. Sequential three-way decision and granulation for cost-sensitive face recognition. Knowl.-Based Syst. 2016, 91, 241–251. [Google Scholar] [CrossRef]
  28. Liang, D.C.; Liu, D. Deriving three-way decisions from intuitionistic fuzzy decision-theoretic rough sets. Inf. Sci. 2015, 300, 28–48. [Google Scholar] [CrossRef]
  29. Wang, X.; Zhang, X. Linear-combined rough vague sets and their three-way decision modeling and uncertainty measurement optimization. Int. J. Mach. Learn. Cybern. 2023, 14, 3827–3850. [Google Scholar] [CrossRef]
  30. Qian, Y.H.; Zhang, H.; Sang, Y.L.; Liang, J.Y. Multigranulation decision-theoretic rough sets. Int. J. Approx. Reason. 2014, 55, 225–237. [Google Scholar] [CrossRef]
  31. Sun, B.Z.; Ma, W.M.; Xiao, X. Three-way group decision making based on multigranulation fuzzy decision-theoretic rough set over two universes. Int. J. Approx. Reason. 2017, 81, 87–102. [Google Scholar] [CrossRef]
  32. Turney, P.D. Cost-sensitive classification: Empirical evaluation of a hybrid genetic decision tree induction algorithm. J. Artif. Intell. Res. 1995, 2, 369–409. [Google Scholar] [CrossRef]
  33. Shen, F.; Yang, Z.; Kuang, J.; Zhu, Z. Reject inference in credit scoring based on cost-sensitive learning and joint distribution adaptation method. Expert Syst. Appl. 2024, 251, 124072. [Google Scholar] [CrossRef]
  34. Zhang, S.; Xie, L.; Chen, Y.; Zhang, S. Inter-class margin climbing with cost-sensitive learning in neural network classification. Knowl. Inf. Syst. 2025, 67, 1993–2016. [Google Scholar] [CrossRef]
  35. Yang, Q.; Ling, C.X.; Chai, X.; Pan, R. Test-cost sensitive classification on data with missing values. IEEE Trans. Knowl. Data Eng. 2006, 18, 626–638. [Google Scholar] [CrossRef]
  36. Du, J.; Cai, Z.; Ling, C.X. Cost-sensitive decision trees with pre-pruning. In Advances in Artificial Intelligence; Canadian AI 2007, LNAI 4509; Kobti, Z., Wu, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 171–179. [Google Scholar]
  37. Sheng, V.S.; Ling, C.X.; Ni, A.; Zhang, S. Cost-sensitive test strategies. In Proceedings of the 21st AAAI Conference on Artificial Intelligence, Boston, MA, USA, 16–20 July 2006; pp. 482–487. [Google Scholar]
  38. Ling, C.X.; Sheng, V.S.; Yang, Q. Test strategies for cost-sensitive decision trees. IEEE Trans. Knowl. Data Eng. 2006, 18, 1055–1067. [Google Scholar] [CrossRef]
  39. Min, F.; Liu, Q.H. A hierarchical model for test-cost-sensitive decision systems. Inf. Sci. 2009, 179, 2442–2452. [Google Scholar] [CrossRef]
  40. Xu, C.; Min, F. Weighted reduction for decision tables. In Fuzzy Systems and Knowledge Discovery; FSKD 2006, LNCS 4223; Wang, L., Jiao, L., Shi, G., Li, X., Liu, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 246–255. [Google Scholar]
  41. Wang, L.D.; Liu, X.D. Concept analysis via rough set and AFS algebra. Inf. Sci. 2008, 178, 4125–4137. [Google Scholar] [CrossRef]
  42. Zhi, H.L.; Li, J. Granule description based on formal concept analysis. Knowl.-Based Syst. 2016, 104, 62–73. [Google Scholar] [CrossRef]
  43. Kent, R.E. Rough concept analysis: A synthesis of rough sets and formal concept analysis. Fundam. Inform. 1996, 27, 169–181. [Google Scholar] [CrossRef]
Figure 1. The connections between four types of information systems.
Table 1. An attribute preference degree function v.

AT      Money   Labor   Time   Memory   Bandwidth
v(a)    0.9     0.8     0.7    −0.8     −0.9
Table 2. A group-membership function.

AT      Money   Labor   Time   Memory   Bandwidth
g(a)    1       2       2      3        3
Table 3. A group common attribute preference degree function.

k         1      2      3
g_c(k)    0.1    0.2    −0.25
Table 4. A complex group-membership function.

k         1               2              3
g_r(k)    Money, Labor    Labor, Time    Memory, Bandwidth
Table 5. A complex group-membership function.

      Money   Labor   Time   Memory   Bandwidth
G1    1       1       0      0        0
G2    0       1       1      0        0
G3    0       0       0      1        1
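Tables 4 and 5 record the same complex group-membership function g_r in two equivalent forms: a list of member attributes per group and a binary incidence matrix. Unlike the simple group-membership function of Table 2, groups may now overlap (Labor belongs to both G1 and G2). The following minimal Python sketch, with illustrative variable names, converts the list form into the matrix form:

```python
# Convert the list form of a complex group-membership function (Table 4)
# into its binary incidence-matrix form (Table 5). Names are illustrative.
attributes = ["Money", "Labor", "Time", "Memory", "Bandwidth"]
gr = {1: {"Money", "Labor"}, 2: {"Labor", "Time"}, 3: {"Memory", "Bandwidth"}}

def incidence_matrix(gr, attributes):
    """Row k holds 1 where attribute a belongs to group k, and 0 otherwise."""
    return {k: [1 if a in members else 0 for a in attributes]
            for k, members in gr.items()}

for k, row in incidence_matrix(gr, attributes).items():
    print(f"G{k}", row)
# G1 [1, 1, 0, 0, 0]
# G2 [0, 1, 1, 0, 0]
# G3 [0, 0, 0, 1, 1]
```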
Table 6. A flu dataset.

Person   Sneeze   Temperature     Headache   Cough        Flu
x1       No       Normal          No         Never        No
x2       No       Normal          No         Sometimes    No
x3       No       Slightly high   A little   Sometimes    No
x4       Yes      Slightly high   A little   Sometimes    Yes
x5       Yes      Slightly high   A little   Frequently   Yes
x6       Yes      High            Serious    Frequently   Yes
x7       Yes      High            A little   Sometimes    Yes
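Table 6 lends itself to a small worked illustration. The minimal Python sketch below computes the standard optimistic and pessimistic multigranulation lower approximations [16,17] of the target concept X = {x : Flu = Yes}; the choice of the two single-attribute granulations (Sneeze and Temperature) and all identifiers are illustrative rather than taken from the paper:

```python
# Optimistic vs. pessimistic multigranulation lower approximations of
# X = {x : Flu = Yes} from Table 6. Standard MGRS definitions [16,17]:
# optimistic requires [x]_Ai ⊆ X for SOME granulation Ai,
# pessimistic requires [x]_Ai ⊆ X for ALL granulations Ai.
data = {
    "x1": {"Sneeze": "No",  "Temperature": "Normal",        "Flu": "No"},
    "x2": {"Sneeze": "No",  "Temperature": "Normal",        "Flu": "No"},
    "x3": {"Sneeze": "No",  "Temperature": "Slightly high", "Flu": "No"},
    "x4": {"Sneeze": "Yes", "Temperature": "Slightly high", "Flu": "Yes"},
    "x5": {"Sneeze": "Yes", "Temperature": "Slightly high", "Flu": "Yes"},
    "x6": {"Sneeze": "Yes", "Temperature": "High",          "Flu": "Yes"},
    "x7": {"Sneeze": "Yes", "Temperature": "High",          "Flu": "Yes"},
}
X = {x for x, row in data.items() if row["Flu"] == "Yes"}

def eq_class(x, attr):
    """Equivalence class of x under the single-attribute granulation attr."""
    return {y for y, row in data.items() if row[attr] == data[x][attr]}

grans = ["Sneeze", "Temperature"]
optimistic  = {x for x in data if any(eq_class(x, a) <= X for a in grans)}
pessimistic = {x for x in data if all(eq_class(x, a) <= X for a in grans)}
print(sorted(optimistic))   # ['x4', 'x5', 'x6', 'x7']
print(sorted(pessimistic))  # ['x6', 'x7']
```

The pessimistic approximation is the stricter of the two, and the gap between the two results ({x4, x5} here) is exactly the region that an attitude preference vector is meant to adjudicate.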
Table 7. An attitude preference vector.

A       A1   A2   A3   A4
m(A)    P    P    O    O
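The attitude preference vector of Table 7 tags A1 and A2 as pessimistic (P) and A3 and A4 as optimistic (O). The sketch below assumes one plausible connection method, in which P-tagged granulations act conjunctively, O-tagged granulations act disjunctively, and the two blocks are joined by a conjunction; the paper studies several such connectives, so this is only an illustration on toy partitions, not the definitive model:

```python
# A synthesized lower approximation under the attitude vector of Table 7,
# assuming: "for all" over P-tagged granulations, "there exists" over
# O-tagged granulations, and the two blocks joined by "and". The partitions
# below are toy data invented for the illustration.
universe = {1, 2, 3, 4, 5, 6}
X = {1, 2, 3}
partitions = {                      # four toy granulations A1..A4
    "A1": [{1, 2}, {3, 4}, {5, 6}],
    "A2": [{1, 2, 3}, {4, 5, 6}],
    "A3": [{1}, {2, 3}, {4, 5, 6}],
    "A4": [{1, 2, 3, 4}, {5, 6}],
}
attitude = {"A1": "P", "A2": "P", "A3": "O", "A4": "O"}  # Table 7

def eq_class(x, name):
    """Block of the partition for granulation `name` that contains x."""
    return next(block for block in partitions[name] if x in block)

P = [a for a, m in attitude.items() if m == "P"]
O = [a for a, m in attitude.items() if m == "O"]
lower = {x for x in universe
         if all(eq_class(x, a) <= X for a in P)
         and any(eq_class(x, a) <= X for a in O)}
print(sorted(lower))  # [1, 2]
```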
Table 8. An information system with 15 objects.

Object   a   b   c   d   e   f   g   h   i   j   k   DL
1        H   L   H   L   H   L   H   L   H   L   H   M
2        H   H   L   H   H   H   L   M   H   H   H   M
3        L   H   L   L   L   H   H   L   L   H   H   M
4        M   H   L   H   H   L   L   H   H   L   L   M
5        L   H   L   M   L   H   H   H   L   H   H   L
6        H   H   L   L   L   L   L   L   H   H   L   L
7        H   H   H   H   H   H   M   H   H   H   H   H
8        L   H   L   H   L   H   H   H   L   H   H   L
9        H   H   H   L   H   H   L   L   H   H   L   H
10       L   H   L   H   H   L   H   H   L   L   H   L
11       L   H   L   L   L   L   L   L   L   M   L   M
12       H   H   H   L   H   H   H   L   H   H   M   L
13       L   M   L   H   H   H   M   H   L   H   H   M
14       L   L   L   H   L   L   L   H   L   L   L   L
15       L   H   H   H   L   H   M   H   L   H   L   M
Table 9. An incomplete attitude preference vector.

A       A1   A2   A3   A4
m(A)    P    ?    ?    O
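Table 9 leaves the attitudes of A2 and A3 unspecified. One natural reading, offered here purely as an assumption, is that each unknown entry may be instantiated as either P or O, so q unknown entries yield 2^q candidate attitude vectors to evaluate:

```python
from itertools import product

# Enumerate all completions of the incomplete attitude preference vector
# of Table 9, assuming each "?" may stand for either P or O. With q unknown
# entries there are 2**q candidate vectors. This enumeration is only an
# illustrative reading of how unspecified attitudes might be handled.
incomplete = {"A1": "P", "A2": "?", "A3": "?", "A4": "O"}
unknown = [a for a, m in incomplete.items() if m == "?"]

for choice in product("PO", repeat=len(unknown)):
    vector = {**incomplete, **dict(zip(unknown, choice))}
    print(vector)
# 4 completions: (P,P,P,O), (P,P,O,O), (P,O,P,O), (P,O,O,O)
```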