
Uncertainty Measurement for a Set-Valued Information System: Gaussian Kernel Method

Key Laboratory of Complex System Optimization and Big Data Processing in Department of Guangxi Education, Yulin Normal University, Yulin 537000, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(2), 199; https://doi.org/10.3390/sym11020199
Submission received: 16 December 2018 / Revised: 3 January 2019 / Accepted: 24 January 2019 / Published: 11 February 2019

Abstract

A set-valued information system (SIS) is the generalization of a single-valued information system. This article explores uncertainty measurement for a SIS by using the Gaussian kernel. The fuzzy $T_{\cos}$-equivalence relation induced by a SIS is first obtained by means of the Gaussian kernel. Then, information structures in this SIS are described by set vectors. Next, the dependence between information structures is defined and properties of information structures are investigated. Lastly, uncertainty measures of a SIS are presented by using its information structures, and an effectiveness analysis is performed to assess the feasibility of the presented measures. The results of this article will help us understand the intrinsic properties of uncertainty in a SIS.

1. Introduction

1.1. Research Background and Related Works

Granular computing (GrC), a fundamental approach to knowledge representation and data mining, was presented by Zadeh [1,2,3,4]. Information granulation, organization and causation are its basic notions. An information granule is a collection of objects drawn together by some constraints, and the process of building information granules is referred to as information granulation. Information granulation partitions a universe into a family of disjoint information granules. A granular structure is the collection of information granules in which the internal structure of each granule is visible as a sub-structure. Lin [5,6,7] and Yao [8,9,10] illustrated the importance of GrC, which aroused wide interest in it. To date, research on GrC has followed four main approaches: rough set theory (RST) [11], fuzzy set theory [12], concept lattices [13,14] and quotient spaces [15].
RST is an effective tool for managing uncertainty. The information system (IS), the basic notion of RST, was presented by Pawlak [11,16,17,18,19]. Many applications of RST, for instance, uncertainty modeling [20,21,22,23], reasoning with uncertainty [8,24,25], rule extraction [26,27,28,29], and classification and feature selection [30,31,32,33,34], are related to ISs.
In GrC on an IS, the study of information structures is a significant research topic. An equivalence relation is a particular kind of similarity between objects of a dataset. Each attribute subset determines an equivalence relation which partitions the object set into disjoint classes. These disjoint classes are called equivalence classes, and each of them may be regarded as an information granule consisting of indistinguishable objects [26]. The collection of all these information granules, expressed as a set vector, constitutes the information structure induced by the given attribute subset in the IS.
Uncertainty measurement is an important issue in many fields, such as machine learning [35], pattern recognition [36,37], image processing [38], medical diagnosis [39] and data mining [40], and a number of scholars have explored it. For example, Yao et al. [9] presented a granularity measure from the viewpoint of granulation; Wierman [29] provided measures of uncertainty and granularity in RST; Bianucci et al. [41,42] explored entropy and co-entropy approaches for uncertainty measurement of coverings; Yao [25] studied several types of information-theoretical measures of attribute importance in RST; Beaubouef et al. [43] proposed a method for measuring the uncertainty of rough sets; Liang et al. [44,45] investigated information granulation in complete information systems; Dai et al. [46] researched entropy and granularity measures for SISs; Qian et al. [47,48] presented an axiomatic definition of information granulation in a knowledge base and examined the information granularity of a fuzzy relation by using its fuzzy granular structure; Xu et al. [49] considered knowledge granulation in ordered information systems; Dai et al. [50] studied the uncertainty of incomplete interval-valued information systems based on α-weak similarity; Xie et al. [51] put forward new uncertainty measures for an interval-valued information system; and Zhang et al. [52] measured the uncertainty of a fully fuzzy information system.

1.2. Motivation and Inspiration

A SIS carries uncertainty, and how to find uncertainty measures for a SIS is a meaningful research issue. However, until now, uncertainty measurement for a SIS has not been reported. The purpose of this article is to address uncertainty measurement in a SIS by using its information structures. The information granule of each object in a given SIS is first constructed by means of the Gaussian kernel, and information structures are obtained from these granules. The uncertainty of the SIS is then measured by using its information structures. To evaluate the performance of the presented measures, an effectiveness analysis is carried out by means of elementary statistical methods.
Why do we investigate uncertainty measurement for a SIS? Because a SIS itself carries uncertainty. Why do we use information structures to measure this uncertainty? Because it is otherwise hard to compare uncertainty measure values between SISs; once the dependence between two information structures is established, the measure values can be compared through that dependence.
The remaining sections of this article proceed as follows: Section 2 reviews some notions about fuzzy sets, fuzzy relations and SISs. Section 3 defines the distance between two objects in a SIS. Section 4 obtains the fuzzy $T_{\cos}$-equivalence relation induced by a SIS by using the Gaussian kernel. Section 5 investigates information structures in a SIS. Section 6 gives some tools for assessing the uncertainty of a SIS. Section 7 summarizes this article.

2. Preliminaries

Some notions about fuzzy sets, fuzzy relations and SISs are reviewed.
Throughout this article, $U$ denotes a finite set, $2^U$ denotes the family of all subsets of $U$, and $I$ denotes the unit interval $[0, 1]$. Put
$$U = \{u_1, u_2, \ldots, u_n\}.$$

2.1. Fuzzy Sets and Fuzzy Relations

Fuzzy sets are extensions of ordinary sets [12]. A fuzzy set $P$ in $U$ is a function assigning to each element $u$ of $U$ a value $P(u) \in I$, where $P(u)$ is referred to as the membership degree of $u$ in $P$.
In this article, $I^U$ denotes the set of all fuzzy sets in $U$. The cardinality of $P \in I^U$ can be calculated as
$$|P| = \sum_{i=1}^{n} P(u_i).$$
If $R$ is a fuzzy set in $U \times U$, then $R$ is referred to as a fuzzy relation on $U$. In this article, $I^{U \times U}$ denotes the set of all fuzzy relations on $U$.
Let $R \in I^{U \times U}$. Then $R$ may be represented by the matrix
$$M(R) = (R(u_i, u_j))_{n \times n},$$
where $R(u_i, u_j)$ expresses the similarity between $u_i$ and $u_j$.
If $M(R)$ is the identity matrix, then $R$ is said to be the fuzzy identity relation, written $R = \triangle$; if $R(u_i, u_j) = 1$ for all $i, j \le n$, then $R$ is said to be the fuzzy universal relation, written $R = \omega$.
Let $R \in I^{U \times U}$. For each $u \in U$, a fuzzy set $S_R(u)$ is defined by
$$S_R(u)(v) = R(u, v).$$
Then $S_R(u)$ can be viewed as the information granule of the point $u$ [48].
Definition 1 ([53]). A function $T: I^2 \to I$ is called a t-norm if it satisfies:
(1) Commutativity: $T(a, b) = T(b, a)$;
(2) Associativity: $T(T(a, b), c) = T(a, T(b, c))$;
(3) Monotonicity: $a \le c,\ b \le d \Rightarrow T(a, b) \le T(c, d)$;
(4) Boundary condition: $T(a, 1) = a$.
Example 1. For any $a, b \in I$, denote
$$T_{\cos}(a, b) = \max\left\{ab - \sqrt{1 - a^2}\sqrt{1 - b^2},\ 0\right\}.$$
Then $T_{\cos}$ is a t-norm.
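Geometrically, writing $a = \cos\alpha$ and $b = \cos\beta$ with $\alpha, \beta \in [0, \pi/2]$ gives $T_{\cos}(a, b) = \cos(\min\{\alpha + \beta, \pi/2\})$, which makes the t-norm axioms plausible; they can also be spot-checked numerically. A minimal Python sketch for illustration (ours, not part of the original paper):

```python
import numpy as np

def t_cos(a, b):
    # Example 1: T_cos(a, b) = max{ab - sqrt(1 - a^2) * sqrt(1 - b^2), 0}
    return max(a * b - np.sqrt(1.0 - a * a) * np.sqrt(1.0 - b * b), 0.0)

# Spot-check commutativity, associativity and the boundary condition
# of Definition 1 on a grid of points in [0, 1].
pts = np.linspace(0.0, 1.0, 11)
for a in pts:
    assert np.isclose(t_cos(a, 1.0), a)                  # boundary condition
    for b in pts:
        assert np.isclose(t_cos(a, b), t_cos(b, a))      # commutativity
        for c in pts:
            assert np.isclose(t_cos(t_cos(a, b), c),
                              t_cos(a, t_cos(b, c)))     # associativity
```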
Definition 2 ([54]). Suppose that $T$ is a t-norm and $R \in I^{U \times U}$. Then $R$ is a fuzzy T-equivalence relation on $U$ if it satisfies, for all $u, v, w \in U$:
(1) Reflexivity: $R(u, u) = 1$;
(2) Symmetry: $R(u, v) = R(v, u)$;
(3) T-transitivity: $T(R(u, v), R(v, w)) \le R(u, w)$.
Proposition 1 ([55]). Suppose that $f: U \times U \to I$ satisfies $f(u, u) = 1$ for all $u \in U$. Then for all $u, v, w \in U$,
$$T_{\cos}(f(u, v), f(v, w)) \le f(u, w).$$
Corollary 1. Let $R \in I^{U \times U}$. If $R$ is reflexive, then $R$ is $T_{\cos}$-transitive.

2.2. Set-Valued Information Systems

Definition 3 ([11]). Given that $U$ is a finite object set and $A$ is a finite attribute set, the pair $(U, A)$ is referred to as an IS if each $a \in A$ determines an information function $a: U \to V_a$, where $V_a = \{a(u) : u \in U\}$.
If $P \subseteq A$, then $(U, P)$ is referred to as a subsystem of $(U, A)$.
Definition 4 ([56]). Let $(U, A)$ be an IS. If $a(u)$ is a set for every $a \in A$ and $u \in U$, then $(U, A)$ is referred to as a set-valued information system (SIS).
If $P \subseteq A$, then $(U, P)$ is referred to as a subsystem of $(U, A)$.
Example 2. Table 1 shows a SIS $(U, A)$, where $U = \{u_1, u_2, \ldots, u_{10}\}$ and $A = \{a_1, a_2, a_3, a_4\}$. Denote
$$P_i = \{a_1, \ldots, a_i\} \quad (i = 1, 2, 3, 4).$$
Then $A = P_4$.

3. The Distance between Two Objects in a SIS

Definition 5 ([54]). Suppose that $(U, A)$ is a SIS. For $u, v \in U$ and $a \in A$, the distance between $a(u)$ and $a(v)$ is defined as
$$d(a(u), a(v)) = 1 - \frac{|a(u) \cap a(v)|}{M_a},$$
where $M_a = \max\{|a(u)| : u \in U\}$.
According to the above definition, the distance between two objects in a SIS is defined as follows.
Definition 6. Assume that $(U, A)$ is a SIS and $P \subseteq A$. For $u, v \in U$, the distance between $u$ and $v$ in $(U, P)$ is defined as
$$d_P(u, v) = \sqrt{\sum_{a \in P} d^2(a(u), a(v))}.$$
Proposition 2. Let $(U, A)$ be a SIS and $P \subseteq A$. Then for all $u, v \in U$,
$$0 \le d_P(u, v) \le \sqrt{|P|}.$$
Proof. For each $a \in P$ and all $u, v \in U$, $0 \le d(a(u), a(v)) \le 1$. Then
$$0 \le \sum_{a \in P} d^2(a(u), a(v)) \le |P|.$$
Thus
$$0 \le d_P(u, v) \le \sqrt{|P|}. \qquad \Box$$
Example 3. Let us calculate $d_A(u_2, u_4)$ in Table 1. We have
$$d(a_1(u_2), a_1(u_4)) = 1 - \frac{|a_1(u_2) \cap a_1(u_4)|}{M_{a_1}} = 1 - 0 = 1;$$
$$d(a_2(u_2), a_2(u_4)) = 1 - \frac{|a_2(u_2) \cap a_2(u_4)|}{M_{a_2}} = 1 - \frac{2}{3} \approx 0.3333;$$
$$d(a_3(u_2), a_3(u_4)) = 1 - \frac{|a_3(u_2) \cap a_3(u_4)|}{M_{a_3}} = 1 - 1 = 0;$$
$$d(a_4(u_2), a_4(u_4)) = 1 - \frac{|a_4(u_2) \cap a_4(u_4)|}{M_{a_4}} = 1 - \frac{1}{3} \approx 0.6667.$$
Then
$$d_A(u_2, u_4) = \sqrt{\sum_{a \in A} d^2(a(u_2), a(u_4))} = \sqrt{1^2 + 0.3333^2 + 0^2 + 0.6667^2} \approx 1.2472.$$
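The computation in Example 3 is easy to script. The following is a minimal Python sketch of Definitions 5 and 6, assuming attribute values are stored as Python sets; the data layout and the names are ours, not part of the original paper.

```python
from math import sqrt

# a(u) for u2 and u4, read off Table 1.
table = {
    "u2": {"a1": {"mid", "low"}, "a2": {"high", "mid", "low"},
           "a3": {"compact"},    "a4": {"high", "mid", "low"}},
    "u4": {"a1": {"high"},       "a2": {"high", "low"},
           "a3": {"compact"},    "a4": {"low"}},
}
# M_a = max over all u in U of |a(u)|, taken over the whole of Table 1.
M = {"a1": 3, "a2": 3, "a3": 1, "a4": 3}

def d(u, v, a):
    # Definition 5: d(a(u), a(v)) = 1 - |a(u) ∩ a(v)| / M_a
    return 1 - len(table[u][a] & table[v][a]) / M[a]

def d_P(u, v, P):
    # Definition 6: square root of the sum of squared attribute distances
    return sqrt(sum(d(u, v, a) ** 2 for a in P))

print(d_P("u2", "u4", ["a1", "a2", "a3", "a4"]))  # ~1.2472, as in Example 3
```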

4. The Fuzzy $T_{\cos}$-Equivalence Relation Induced by a SIS

A Gaussian kernel maps data into a feature space in which they become more nearly linearly separable, which simplifies classification tasks [57,58]. Hu et al. [59,60] established the relationship between rough sets and the Gaussian kernel, so that fuzzy relations can be obtained from the Gaussian kernel. In this section, a fuzzy $T_{\cos}$-equivalence relation on the object set of a SIS is extracted by using a Gaussian kernel.
The Gaussian kernel $G(u, v) = \exp\left(-\frac{\|u - v\|^2}{2\delta^2}\right)$ computes the similarity between objects $u$ and $v$, where $\|u - v\|$ is the Euclidean distance between $u$ and $v$ and $\delta$ is a width parameter. In this article, we pick $\delta \in (0, 1]$.
Let $(U, A)$ be a SIS, $P \subseteq A$ and $\delta \in (0, 1]$. Since $d_P(u, v)$ expresses the distance between objects $u$ and $v$ in $(U, P)$, $\|u - v\|$ can be replaced by $d_P(u, v)$. Thus, the Gaussian kernel yields $\exp\left(-\frac{d_P^2(u, v)}{2\delta^2}\right)$. Since $\exp\left(-\frac{d_P^2(u, v)}{2\delta^2}\right)$ expresses the similarity between objects $u$ and $v$ in $(U, P)$, $\left(\exp\left(-\frac{d_P^2(u_i, u_j)}{2\delta^2}\right)\right)_{n \times n}$ is a fuzzy relation on $U$. The specific definition is as follows.
Definition 7. Let $(U, A)$ be a SIS. Given $P \subseteq A$ and $\delta \in (0, 1]$, denote
$$R_P^{G(\delta)}(u_i, u_j) = \exp\left(-\frac{d_P^2(u_i, u_j)}{2\delta^2}\right),$$
$$M(R_P^{G(\delta)}) = (R_P^{G(\delta)}(u_i, u_j))_{n \times n}.$$
Then $M(R_P^{G(\delta)})$ is said to be the Gaussian kernel matrix of $(U, P)$ relative to $\delta$.
Theorem 1. Let $(U, A)$ be a SIS. Given $P \subseteq A$ and $\delta \in (0, 1]$, $R_P^{G(\delta)}$ is a fuzzy $T_{\cos}$-equivalence relation on $U$.
Proof. Since $d_P(u, u) = 0$, we have $R_P^{G(\delta)}(u, u) = 1$, so $R_P^{G(\delta)}$ is reflexive; symmetry follows from the symmetry of $d_P$. The $T_{\cos}$-transitivity then holds by Corollary 1. □
Definition 8. Let $(U, A)$ be a SIS. Given $P \subseteq A$ and $\delta \in (0, 1]$, $R_P^{G(\delta)}$ is referred to as the fuzzy $T_{\cos}$-equivalence relation induced by $(U, P)$ relative to $\delta$.
Example 4. (Continued from Example 2) Pick $\delta = 0.8$; we have
$$M(R_{P_1}^{G(\delta)}) = \begin{bmatrix}
1.0000 & 0.5353 & 0.7575 & 1.0000 & 0.5353 & 0.7575 & 0.7575 & 0.5353 & 1.0000 & 1.0000 \\
0.5353 & 1.0000 & 0.7575 & 0.5353 & 0.7575 & 0.7575 & 0.9329 & 0.7575 & 0.5353 & 0.5353 \\
0.7575 & 0.7575 & 1.0000 & 0.7575 & 0.5353 & 0.7575 & 0.9329 & 0.7575 & 0.7575 & 0.7575 \\
1.0000 & 0.5353 & 0.7575 & 1.0000 & 0.5353 & 0.7575 & 0.7575 & 0.5353 & 1.0000 & 1.0000 \\
0.5353 & 0.7575 & 0.5353 & 0.5353 & 1.0000 & 0.7575 & 0.7575 & 0.5353 & 0.5353 & 0.5353 \\
0.7575 & 0.7575 & 0.7575 & 0.7575 & 0.7575 & 1.0000 & 0.9329 & 0.5353 & 0.7575 & 0.7575 \\
0.7575 & 0.9329 & 0.9329 & 0.7575 & 0.7575 & 0.9329 & 1.0000 & 0.7575 & 0.7575 & 0.7575 \\
0.5353 & 0.7575 & 0.7575 & 0.5353 & 0.5353 & 0.5353 & 0.7575 & 1.0000 & 0.5353 & 0.5353 \\
1.0000 & 0.5353 & 0.7575 & 1.0000 & 0.5353 & 0.7575 & 0.7575 & 0.5353 & 1.0000 & 1.0000 \\
1.0000 & 0.5353 & 0.7575 & 1.0000 & 0.5353 & 0.7575 & 0.7575 & 0.5353 & 1.0000 & 1.0000
\end{bmatrix},$$
$$M(R_{P_2}^{G(\delta)}) = \begin{bmatrix}
1.0000 & 0.4055 & 0.7575 & 0.7575 & 0.4055 & 0.4055 & 0.7575 & 0.4055 & 0.5353 & 0.7575 \\
0.4055 & 1.0000 & 0.5737 & 0.4994 & 0.7066 & 0.5737 & 0.7066 & 0.5737 & 0.4055 & 0.5353 \\
0.7575 & 0.5737 & 1.0000 & 0.5737 & 0.4055 & 0.4055 & 0.9329 & 0.5737 & 0.4055 & 0.5737 \\
0.7575 & 0.4994 & 0.5737 & 1.0000 & 0.4055 & 0.4055 & 0.5737 & 0.5353 & 0.5353 & 0.9329 \\
0.4055 & 0.7066 & 0.4055 & 0.4055 & 1.0000 & 0.5737 & 0.5737 & 0.4055 & 0.4055 & 0.4994 \\
0.4055 & 0.5737 & 0.4055 & 0.4055 & 0.5737 & 1.0000 & 0.4994 & 0.2865 & 0.7575 & 0.5737 \\
0.7575 & 0.7066 & 0.9329 & 0.5737 & 0.5737 & 0.4994 & 1.0000 & 0.5737 & 0.4055 & 0.5737 \\
0.4055 & 0.5737 & 0.5737 & 0.5353 & 0.4055 & 0.2865 & 0.5737 & 1.0000 & 0.2865 & 0.4994 \\
0.5353 & 0.4055 & 0.4055 & 0.5353 & 0.4055 & 0.7575 & 0.4055 & 0.2865 & 1.0000 & 0.7575 \\
0.7575 & 0.5353 & 0.5737 & 0.9329 & 0.4994 & 0.5737 & 0.5737 & 0.4994 & 0.7575 & 1.0000
\end{bmatrix},$$
$$M(R_{P_3}^{G(\delta)}) = \begin{bmatrix}
1.0000 & 0.2170 & 0.7575 & 0.4055 & 0.4055 & 0.2170 & 0.7575 & 0.2170 & 0.5353 & 0.4055 \\
0.2170 & 1.0000 & 0.3071 & 0.4994 & 0.3782 & 0.5737 & 0.3782 & 0.5737 & 0.2170 & 0.5353 \\
0.7575 & 0.3071 & 1.0000 & 0.3071 & 0.4055 & 0.2170 & 0.9329 & 0.3071 & 0.4055 & 0.3071 \\
0.4055 & 0.4994 & 0.3071 & 1.0000 & 0.2170 & 0.4055 & 0.3071 & 0.5353 & 0.2865 & 0.9329 \\
0.4055 & 0.3782 & 0.4055 & 0.2170 & 1.0000 & 0.3071 & 0.5737 & 0.2170 & 0.4055 & 0.2673 \\
0.2170 & 0.5737 & 0.2170 & 0.4055 & 0.3071 & 1.0000 & 0.2673 & 0.2865 & 0.4055 & 0.5737 \\
0.7575 & 0.3782 & 0.9329 & 0.3071 & 0.5737 & 0.2673 & 1.0000 & 0.3071 & 0.4055 & 0.3071 \\
0.2170 & 0.5737 & 0.3071 & 0.5353 & 0.2170 & 0.2865 & 0.3071 & 1.0000 & 0.1534 & 0.4994 \\
0.5353 & 0.2170 & 0.4055 & 0.2865 & 0.4055 & 0.4055 & 0.4055 & 0.1534 & 1.0000 & 0.4055 \\
0.4055 & 0.5353 & 0.3071 & 0.9329 & 0.2673 & 0.5737 & 0.3071 & 0.4994 & 0.4055 & 1.0000
\end{bmatrix},$$
$$M(R_{P_4}^{G(\delta)}) = M(R_A^{G(\delta)}) = \begin{bmatrix}
1.0000 & 0.2170 & 0.5737 & 0.3071 & 0.3782 & 0.2025 & 0.7066 & 0.1644 & 0.4055 & 0.3071 \\
0.2170 & 1.0000 & 0.2326 & 0.3782 & 0.3757 & 0.4346 & 0.3529 & 0.4346 & 0.1644 & 0.4055 \\
0.5737 & 0.2326 & 1.0000 & 0.1644 & 0.3071 & 0.2170 & 0.7066 & 0.1644 & 0.2170 & 0.1644 \\
0.3071 & 0.3782 & 0.1644 & 1.0000 & 0.1644 & 0.2170 & 0.2326 & 0.5353 & 0.2865 & 0.4994 \\
0.3782 & 0.3757 & 0.3071 & 0.1644 & 1.0000 & 0.2326 & 0.5737 & 0.1644 & 0.3071 & 0.1431 \\
0.2025 & 0.4346 & 0.2170 & 0.2170 & 0.2326 & 1.0000 & 0.2025 & 0.1534 & 0.2170 & 0.3071 \\
0.7066 & 0.3529 & 0.7066 & 0.2326 & 0.5737 & 0.2025 & 1.0000 & 0.2326 & 0.3071 & 0.1644 \\
0.1644 & 0.4346 & 0.1644 & 0.5353 & 0.1644 & 0.1534 & 0.2326 & 1.0000 & 0.1534 & 0.2673 \\
0.4055 & 0.1644 & 0.2170 & 0.2865 & 0.3071 & 0.2170 & 0.3071 & 0.1534 & 1.0000 & 0.2170 \\
0.3071 & 0.4055 & 0.1644 & 0.4994 & 0.1431 & 0.3071 & 0.1644 & 0.2673 & 0.2170 & 1.0000
\end{bmatrix}.$$
Then $R_{P_i}^{G(\delta)}$ is the fuzzy $T_{\cos}$-equivalence relation induced by the subsystem $(U, P_i)$ with respect to $\delta$ $(i = 1, 2, 3, 4)$.
Given $P \subseteq A$ and $\delta \in (0, 1]$, an algorithm for computing the fuzzy $T_{\cos}$-equivalence relation $R_P^{G(\delta)}$ is designed as follows (Algorithm 1).
Algorithm 1: The fuzzy $T_{\cos}$-equivalence relation.
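The body of Algorithm 1 is reproduced in the original only as an image. The following Python sketch shows the computation it performs, assuming a distance function d_P as in Section 3; the function and variable names are ours.

```python
import numpy as np

def gaussian_kernel_relation(objects, P, d_P, delta):
    """Gaussian kernel matrix M(R_P^G(delta)) of Definition 7:
    R(u_i, u_j) = exp(-d_P(u_i, u_j)^2 / (2 * delta^2))."""
    n = len(objects)
    R = np.ones((n, n))              # diagonal stays 1, since d_P(u, u) = 0
    for i in range(n):
        for j in range(i + 1, n):    # d_P is symmetric, so fill both triangles
            R[i, j] = R[j, i] = np.exp(
                -d_P(objects[i], objects[j], P) ** 2 / (2 * delta ** 2))
    return R
```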

5. Information Structures in a SIS

In this section, information structures in a SIS are investigated.

5.1. Some Concepts of Information Structures in a SIS

Let $R \in I^{U \times U}$. For each $i$, $S_R(u_i)$ can be viewed as the fuzzy neighborhood, or the information granule, of the point $u_i$ [48]. From this point of view, Qian et al. [48] defined the fuzzy granular structure of $R$ as
$$S(R) = (S_R(u_1), S_R(u_2), \ldots, S_R(u_n)).$$
Let $(U, A)$ be a SIS, $P \subseteq A$ and $\delta \in (0, 1]$. By Theorem 1, $R_P^{G(\delta)}$ is a fuzzy $T_{\cos}$-equivalence relation on $U$, and for each $i$, $S_{R_P^{G(\delta)}}(u_i)$ can be viewed as the fuzzy neighborhood or the information granule of the point $u_i$. Based on Qian's idea, we have the following definition.
Definition 9. Let $(U, A)$ be a SIS. For $P \subseteq A$ and $\delta \in (0, 1]$, denote
$$S_\delta(P) = (S_{R_P^{G(\delta)}}(u_1), S_{R_P^{G(\delta)}}(u_2), \ldots, S_{R_P^{G(\delta)}}(u_n)).$$
Then $S_\delta(P)$ is referred to as the δ-information structure of $(U, P)$.
Example 5. (Continued from Example 4) $S_{0.8}(A) = (S_{R_A^{G(0.8)}}(u_1), S_{R_A^{G(0.8)}}(u_2), \ldots, S_{R_A^{G(0.8)}}(u_{10}))$ is the 0.8-information structure of $(U, A)$.
Definition 10. Let $(U, A)$ be a SIS. Given $\delta \in (0, 1]$, put
$$\mathbf{S}_\delta(U, A) = \{S_\delta(P) : P \subseteq A\}.$$
Then $\mathbf{S}_\delta(U, A)$ is referred to as the δ-information structure base of $(U, A)$.
Definition 11. Assume that $(U, A)$ is a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$, if $S_{R_P^{G(\delta_1)}}(u_i) = S_{R_Q^{G(\delta_2)}}(u_i)$ for all $i$, then $S_{\delta_1}(P)$ and $S_{\delta_2}(Q)$ are called the same, written $S_{\delta_1}(P) = S_{\delta_2}(Q)$.
Below, the dependence between information structures is defined.
Definition 12. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$:
(1) $S_{\delta_2}(Q)$ is said to be dependent on $S_{\delta_1}(P)$ if $S_{R_Q^{G(\delta_2)}}(u_i) \subseteq S_{R_P^{G(\delta_1)}}(u_i)$ for all $i$; this is written $S_{\delta_2}(Q) \preceq S_{\delta_1}(P)$.
(2) $S_{\delta_2}(Q)$ is said to be strictly dependent on $S_{\delta_1}(P)$ if $S_{\delta_2}(Q) \preceq S_{\delta_1}(P)$ and $S_{\delta_2}(Q) \ne S_{\delta_1}(P)$; this is written $S_{\delta_2}(Q) \prec S_{\delta_1}(P)$.

5.2. Properties of Information Structures in a SIS

Theorem 2. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$,
$$S_{\delta_1}(P) = S_{\delta_2}(Q) \iff R_P^{G(\delta_1)} = R_Q^{G(\delta_2)}.$$
Proof. Obvious. □
Theorem 3. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$,
$$S_{\delta_1}(P) \preceq S_{\delta_2}(Q) \iff R_P^{G(\delta_1)} \subseteq R_Q^{G(\delta_2)}.$$
Proof. Obvious. □
Corollary 2. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$,
$$S_{\delta_1}(P) \prec S_{\delta_2}(Q) \iff R_P^{G(\delta_1)} \subsetneq R_Q^{G(\delta_2)}.$$
Proof. This follows from Theorems 2 and 3. □
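By Theorems 2 and 3, equality and dependence of information structures can be decided directly on the Gaussian kernel matrices. A minimal sketch, assuming the matrices come from Algorithm 1 (function names are ours):

```python
import numpy as np

def is_dependent(R_Q, R_P):
    """S_{delta2}(Q) ⪯ S_{delta1}(P) iff R_Q^{G(delta2)} ⊆ R_P^{G(delta1)},
    i.e. the kernel matrices compare elementwise (Theorem 3)."""
    return bool(np.all(R_Q <= R_P))

def is_strictly_dependent(R_Q, R_P):
    # Corollary 2: dependence plus strict inequality somewhere.
    return is_dependent(R_Q, R_P) and not np.array_equal(R_Q, R_P)
```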
Theorem 4. Let $(U, A)$ be a SIS.
(1) If $0 < \delta_1 \le \delta_2 \le 1$, then for all $P \subseteq A$, $S_{\delta_1}(P) \preceq S_{\delta_2}(P)$;
(2) If $P \subseteq Q \subseteq A$, then for all $\delta \in (0, 1]$, $S_\delta(Q) \preceq S_\delta(P)$.
Proof. (1) For any $i, j$, since $\delta_1 \le \delta_2$, it is clear that
$$\exp\left(-\frac{d_P^2(u_i, u_j)}{2\delta_1^2}\right) \le \exp\left(-\frac{d_P^2(u_i, u_j)}{2\delta_2^2}\right).$$
Then $R_P^{G(\delta_1)}(u_i, u_j) \le R_P^{G(\delta_2)}(u_i, u_j)$, so $R_P^{G(\delta_1)} \subseteq R_P^{G(\delta_2)}$. By Theorem 3, $S_{\delta_1}(P) \preceq S_{\delta_2}(P)$.
(2) By Definition 7,
$$R_P^{G(\delta)}(u_i, u_j) = \exp\left(-\frac{d_P^2(u_i, u_j)}{2\delta^2}\right), \quad R_Q^{G(\delta)}(u_i, u_j) = \exp\left(-\frac{d_Q^2(u_i, u_j)}{2\delta^2}\right).$$
Since $P \subseteq Q$, we have $d_P(u_i, u_j) \le d_Q(u_i, u_j)$, hence
$$R_Q^{G(\delta)}(u_i, u_j) \le R_P^{G(\delta)}(u_i, u_j) \quad (1 \le i, j \le n).$$
So $R_Q^{G(\delta)} \subseteq R_P^{G(\delta)}$. Thus, by Theorem 3, $S_\delta(Q) \preceq S_\delta(P)$. □
Corollary 3. Let $(U, A)$ be a SIS. Given $0 < \delta_1 \le \delta_2 \le 1$ and $P \subseteq Q \subseteq A$,
$$S_{\delta_1}(Q) \preceq S_{\delta_2}(Q) \preceq S_{\delta_2}(P), \qquad S_{\delta_1}(Q) \preceq S_{\delta_1}(P) \preceq S_{\delta_2}(P).$$
Proof. This holds by Theorem 4. □

6. Measuring Uncertainty of a SIS

In this section, some tools for evaluating uncertainty of a SIS are proposed.

6.1. Granulation Measures for a SIS

Definition 13. Let $(U, A)$ be a SIS and $\delta \in (0, 1]$. A function $M_\delta: 2^A \to (-\infty, +\infty)$ is referred to as an information granulation function in $(U, A)$ with respect to $\delta$ if it satisfies the following conditions:
(1) Non-negativity: for all $P \subseteq A$, $M_\delta(P) \ge 0$;
(2) Invariability: for all $P, Q \subseteq A$, if $S_\delta(P) = S_\delta(Q)$, then $M_\delta(P) = M_\delta(Q)$;
(3) Monotonicity: for all $P, Q \subseteq A$, if $S_\delta(P) \prec S_\delta(Q)$, then $M_\delta(P) < M_\delta(Q)$.
Here, $M_\delta(P)$ is referred to as a δ-information granulation of $(U, P)$.
Similar to Definition 5 in [48], the definition of δ -information granulation of a SIS is given in the following.
Definition 14. Let $(U, A)$ be a SIS and $\delta \in (0, 1]$. For $P \subseteq A$, the δ-information granulation of $(U, P)$ is defined as
$$G_\delta(P) = \frac{1}{n} \sum_{i=1}^{n} \frac{|S_{R_P^{G(\delta)}}(u_i)|}{n}.$$
Example 6. (Continued from Example 4)
$$G_\delta(A) = \frac{1}{10} \sum_{i=1}^{10} \frac{|S_{R_A^{G(\delta)}}(u_i)|}{10} = \frac{37.12}{100} \approx 0.3712.$$
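Since the cardinality $|S_{R_P^{G(\delta)}}(u_i)|$ of a fuzzy granule is just the $i$-th row sum of the kernel matrix, δ-information granulation reduces to a normalized sum of matrix entries. A minimal sketch, under the same assumptions as the earlier code:

```python
import numpy as np

def information_granulation(R):
    """G_delta(P) of Definition 14: (1/n) * sum_i (|S(u_i)| / n),
    where |S(u_i)| is the i-th row sum of the kernel matrix R."""
    n = R.shape[0]
    return R.sum() / n ** 2

# For M(R_A^G(0.8)) of Example 4, this returns ~0.3712, as in Example 6.
```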
Proposition 3. Suppose that $(U, A)$ is a SIS. Then for all $P \subseteq A$ and $\delta \in (0, 1]$,
$$\frac{1}{n} \le G_\delta(P) \le 1.$$
Moreover, if $R_P^{G(\delta)} = \triangle$, then $G_\delta(P)$ achieves $\frac{1}{n}$; if $R_P^{G(\delta)} = \omega$, then $G_\delta(P)$ achieves 1.
Proof. Since $1 \le |S_{R_P^{G(\delta)}}(u_i)| \le n$ for each $i$, we have $n \le \sum_{i=1}^{n} |S_{R_P^{G(\delta)}}(u_i)| \le n^2$. By Definition 14,
$$\frac{1}{n} \le G_\delta(P) \le 1.$$
If $R_P^{G(\delta)} = \triangle$, then $|S_{R_P^{G(\delta)}}(u_i)| = 1$ for each $i$, so $G_\delta(P) = \frac{1}{n}$.
If $R_P^{G(\delta)} = \omega$, then $|S_{R_P^{G(\delta)}}(u_i)| = n$ for each $i$, so $G_\delta(P) = 1$. □
Proposition 4. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$:
(1) If $S_{\delta_1}(P) \preceq S_{\delta_2}(Q)$, then $G_{\delta_1}(P) \le G_{\delta_2}(Q)$;
(2) If $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$, then $G_{\delta_1}(P) < G_{\delta_2}(Q)$.
Proof. (1) Since $S_{\delta_1}(P) \preceq S_{\delta_2}(Q)$, for each $i$ we have $S_{R_P^{G(\delta_1)}}(u_i) \subseteq S_{R_Q^{G(\delta_2)}}(u_i)$ and hence $|S_{R_P^{G(\delta_1)}}(u_i)| \le |S_{R_Q^{G(\delta_2)}}(u_i)|$. By Definition 14,
$$G_{\delta_1}(P) = \frac{1}{n} \sum_{i=1}^{n} \frac{|S_{R_P^{G(\delta_1)}}(u_i)|}{n} \le \frac{1}{n} \sum_{i=1}^{n} \frac{|S_{R_Q^{G(\delta_2)}}(u_i)|}{n} = G_{\delta_2}(Q).$$
(2) Since $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$, we have $S_{\delta_1}(P) \preceq S_{\delta_2}(Q)$ and $S_{\delta_1}(P) \ne S_{\delta_2}(Q)$.
Then $S_{R_P^{G(\delta_1)}}(u_i) \subseteq S_{R_Q^{G(\delta_2)}}(u_i)$ for each $i$, and $S_{R_P^{G(\delta_1)}}(u_j) \subsetneq S_{R_Q^{G(\delta_2)}}(u_j)$ for some $j$.
So $|S_{R_P^{G(\delta_1)}}(u_i)| \le |S_{R_Q^{G(\delta_2)}}(u_i)|$ for each $i$, and $|S_{R_P^{G(\delta_1)}}(u_j)| < |S_{R_Q^{G(\delta_2)}}(u_j)|$ for some $j$.
Hence $G_{\delta_1}(P) < G_{\delta_2}(Q)$. □
Proposition 5. Let $(U, A)$ be a SIS.
(1) If $0 < \delta_1 \le \delta_2 \le 1$, then for all $P \subseteq A$, $G_{\delta_1}(P) \le G_{\delta_2}(P)$;
(2) If $P \subseteq Q \subseteq A$, then for all $\delta \in (0, 1]$, $G_\delta(Q) \le G_\delta(P)$.
Proof. This holds by Theorem 4 and Proposition 4(1). □
Example 7.
Let $\delta_1^2 = 0.6$ and $\delta_2^2 = 0.8$. Then
$$M(R_A^{G(\delta_1)}) = \begin{bmatrix}
1.0000 & 0.1304 & 0.4768 & 0.2072 & 0.2735 & 0.1189 & 0.6294 & 0.0900 & 0.3001 & 0.2072 \\
0.1304 & 1.0000 & 0.1431 & 0.2735 & 0.2711 & 0.3292 & 0.2493 & 0.3292 & 0.0900 & 0.3001 \\
0.4768 & 0.1431 & 1.0000 & 0.0900 & 0.2072 & 0.1304 & 0.6294 & 0.0900 & 0.1304 & 0.0900 \\
0.2072 & 0.2735 & 0.0900 & 1.0000 & 0.0900 & 0.1304 & 0.1431 & 0.4346 & 0.1889 & 0.3962 \\
0.2735 & 0.2711 & 0.2072 & 0.0900 & 1.0000 & 0.1431 & 0.4768 & 0.0900 & 0.2072 & 0.0748 \\
0.1189 & 0.3292 & 0.1304 & 0.1304 & 0.1431 & 1.0000 & 0.1189 & 0.0821 & 0.1304 & 0.2072 \\
0.6294 & 0.2493 & 0.6294 & 0.1431 & 0.4768 & 0.1189 & 1.0000 & 0.1431 & 0.2072 & 0.0900 \\
0.0900 & 0.3292 & 0.0900 & 0.4346 & 0.0900 & 0.0821 & 0.1431 & 1.0000 & 0.0821 & 0.1722 \\
0.3001 & 0.0900 & 0.1304 & 0.1889 & 0.2072 & 0.1304 & 0.2072 & 0.0821 & 1.0000 & 0.1304 \\
0.2072 & 0.3001 & 0.0900 & 0.3962 & 0.0748 & 0.2072 & 0.0900 & 0.1722 & 0.1304 & 1.0000
\end{bmatrix},$$
$$M(R_A^{G(\delta_2)}) = \begin{bmatrix}
1.0000 & 0.2170 & 0.5737 & 0.3071 & 0.3782 & 0.2025 & 0.7066 & 0.1644 & 0.4055 & 0.3071 \\
0.2170 & 1.0000 & 0.2326 & 0.3782 & 0.3757 & 0.4346 & 0.3529 & 0.4346 & 0.1644 & 0.4055 \\
0.5737 & 0.2326 & 1.0000 & 0.1644 & 0.3071 & 0.2170 & 0.7066 & 0.1644 & 0.2170 & 0.1644 \\
0.3071 & 0.3782 & 0.1644 & 1.0000 & 0.1644 & 0.2170 & 0.2326 & 0.5353 & 0.2865 & 0.4994 \\
0.3782 & 0.3757 & 0.3071 & 0.1644 & 1.0000 & 0.2326 & 0.5737 & 0.1644 & 0.3071 & 0.1431 \\
0.2025 & 0.4346 & 0.2170 & 0.2170 & 0.2326 & 1.0000 & 0.2025 & 0.1534 & 0.2170 & 0.3071 \\
0.7066 & 0.3529 & 0.7066 & 0.2326 & 0.5737 & 0.2025 & 1.0000 & 0.2326 & 0.3071 & 0.1644 \\
0.1644 & 0.4346 & 0.1644 & 0.5353 & 0.1644 & 0.1534 & 0.2326 & 1.0000 & 0.1534 & 0.2673 \\
0.4055 & 0.1644 & 0.2170 & 0.2865 & 0.3071 & 0.2170 & 0.3071 & 0.1534 & 1.0000 & 0.2170 \\
0.3071 & 0.4055 & 0.1644 & 0.4994 & 0.1431 & 0.3071 & 0.1644 & 0.2673 & 0.2170 & 1.0000
\end{bmatrix}.$$
We have
$$G_{\delta_1}(A) \approx 0.2905, \qquad G_{\delta_2}(A) \approx 0.3712.$$
Thus $G_{\delta_1}(A) < G_{\delta_2}(A)$.
Corollary 4. Let $(U, A)$ be a SIS. Given $0 < \delta_1 \le \delta_2 \le 1$ and $P \subseteq Q \subseteq A$,
$$G_{\delta_1}(Q) \le G_{\delta_2}(Q) \le G_{\delta_2}(P), \qquad G_{\delta_1}(Q) \le G_{\delta_1}(P) \le G_{\delta_2}(P).$$
Proof. The result is a consequence of Proposition 5. □
Theorem 5. $G_\delta$ is an information granulation function.
Proof. This holds by Definition 14, Theorem 2 and Proposition 4. □

6.2. Entropy Measures for a SIS

Similar to Definition 8 in [61], we have the following definition.
Definition 15. Suppose that $(U, A)$ is a SIS. For $P \subseteq A$, the δ-information entropy of $(U, P)$ is defined as
$$H_\delta(P) = -\sum_{i=1}^{n} \frac{1}{n} \log_2 \frac{|S_{R_P^{G(\delta)}}(u_i)|}{n}.$$
Example 8. (Continued from Example 4)
$$H_\delta(A) = -\sum_{i=1}^{10} \frac{1}{10} \log_2 \frac{|S_{R_A^{G(\delta)}}(u_i)|}{10} = \frac{14.382}{10} \approx 1.4382.$$
Theorem 6. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$:
(1) If $S_{\delta_1}(P) \preceq S_{\delta_2}(Q)$, then $H_{\delta_2}(Q) \le H_{\delta_1}(P)$;
(2) If $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$, then $H_{\delta_2}(Q) < H_{\delta_1}(P)$.
Proof. (1) Obvious.
(2) Note that $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$. Then, by the proof of Proposition 4(2), $1 \le |S_{R_P^{G(\delta_1)}}(u_i)| \le |S_{R_Q^{G(\delta_2)}}(u_i)|$ for each $i$, and $|S_{R_P^{G(\delta_1)}}(u_j)| < |S_{R_Q^{G(\delta_2)}}(u_j)|$ for some $j$.
Then for each $i$,
$$-\log_2 \frac{|S_{R_P^{G(\delta_1)}}(u_i)|}{n} = \log_2 \frac{n}{|S_{R_P^{G(\delta_1)}}(u_i)|} \ge \log_2 \frac{n}{|S_{R_Q^{G(\delta_2)}}(u_i)|} = -\log_2 \frac{|S_{R_Q^{G(\delta_2)}}(u_i)|}{n},$$
and for the above $j$,
$$-\log_2 \frac{|S_{R_P^{G(\delta_1)}}(u_j)|}{n} > -\log_2 \frac{|S_{R_Q^{G(\delta_2)}}(u_j)|}{n}.$$
Hence $H_{\delta_2}(Q) < H_{\delta_1}(P)$. □
Proposition 6. Let $(U, A)$ be a SIS.
(1) If $0 < \delta_1 \le \delta_2 \le 1$, then for all $P \subseteq A$, $H_{\delta_2}(P) \le H_{\delta_1}(P)$;
(2) If $P \subseteq Q \subseteq A$, then for all $\delta \in (0, 1]$, $H_\delta(P) \le H_\delta(Q)$.
Proof. This holds by Theorem 4 and Theorem 6(1). □
Corollary 5. Let $(U, A)$ be a SIS. Given $0 < \delta_1 \le \delta_2 \le 1$ and $P \subseteq Q \subseteq A$,
$$H_{\delta_2}(P) \le H_{\delta_2}(Q) \le H_{\delta_1}(Q), \qquad H_{\delta_2}(P) \le H_{\delta_1}(P) \le H_{\delta_1}(Q).$$
Proof. The result is a consequence of Proposition 6. □
Rough entropy, proposed by Yao [25], evaluates granularity of a given partition. Similar to Definition 10 in [61], we have the following definition.
Definition 16. Let $(U, A)$ be a SIS. Given $\delta \in (0, 1]$ and $P \subseteq A$, the δ-rough entropy of $(U, P)$ is defined as
$$(E_r)_\delta(P) = -\sum_{i=1}^{n} \frac{1}{n} \log_2 \frac{1}{|S_{R_P^{G(\delta)}}(u_i)|}.$$
Example 9. (Continued from Example 4)
$$(E_r)_\delta(A) = -\sum_{i=1}^{10} \frac{1}{10} \log_2 \frac{1}{|S_{R_A^{G(\delta)}}(u_i)|} = \frac{18.834}{10} \approx 1.8834.$$
Proposition 7. Let $(U, A)$ be a SIS. Given $\delta \in (0, 1]$ and $P \subseteq A$,
$$0 \le (E_r)_\delta(P) \le \log_2 n.$$
Moreover, if $R_P^{G(\delta)} = \triangle$, then $(E_r)_\delta(P)$ achieves 0; if $R_P^{G(\delta)} = \omega$, then $(E_r)_\delta(P)$ achieves $\log_2 n$.
Proof. Note that $R_P^{G(\delta)}$ is a fuzzy equivalence relation on $U$. Then for each $i$,
$$R_P^{G(\delta)}(u_i, u_i) = 1.$$
So $1 \le |S_{R_P^{G(\delta)}}(u_i)| \le n$ for each $i$, and
$$0 \le -\log_2 \frac{1}{|S_{R_P^{G(\delta)}}(u_i)|} = \log_2 |S_{R_P^{G(\delta)}}(u_i)| \le \log_2 n.$$
Then $0 \le -\sum_{i=1}^{n} \frac{1}{n} \log_2 \frac{1}{|S_{R_P^{G(\delta)}}(u_i)|} \le \log_2 n$. By Definition 16,
$$0 \le (E_r)_\delta(P) \le \log_2 n.$$
If $R_P^{G(\delta)} = \triangle$, then $|S_{R_P^{G(\delta)}}(u_i)| = 1$ for each $i$, so $(E_r)_\delta(P) = 0$.
If $R_P^{G(\delta)} = \omega$, then $|S_{R_P^{G(\delta)}}(u_i)| = n$ for each $i$, so $(E_r)_\delta(P) = \log_2 n$. □
Proposition 8. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$:
(1) If $S_{\delta_1}(P) \preceq S_{\delta_2}(Q)$, then $(E_r)_{\delta_1}(P) \le (E_r)_{\delta_2}(Q)$;
(2) If $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$, then $(E_r)_{\delta_1}(P) < (E_r)_{\delta_2}(Q)$.
Proof. (1) Obvious.
(2) Note that $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$. Then, by the proof of Proposition 4(2), $1 \le |S_{R_P^{G(\delta_1)}}(u_i)| \le |S_{R_Q^{G(\delta_2)}}(u_i)|$ for each $i$, and $|S_{R_P^{G(\delta_1)}}(u_j)| < |S_{R_Q^{G(\delta_2)}}(u_j)|$ for some $j$.
Then for each $i$,
$$-\log_2 \frac{1}{|S_{R_P^{G(\delta_1)}}(u_i)|} = \log_2 |S_{R_P^{G(\delta_1)}}(u_i)| \le \log_2 |S_{R_Q^{G(\delta_2)}}(u_i)| = -\log_2 \frac{1}{|S_{R_Q^{G(\delta_2)}}(u_i)|},$$
and for the above $j$,
$$-\log_2 \frac{1}{|S_{R_P^{G(\delta_1)}}(u_j)|} < -\log_2 \frac{1}{|S_{R_Q^{G(\delta_2)}}(u_j)|}.$$
Hence $(E_r)_{\delta_1}(P) < (E_r)_{\delta_2}(Q)$. □
Proposition 9. Let $(U, A)$ be a SIS.
(1) If $0 < \delta_1 \le \delta_2 \le 1$, then for all $P \subseteq A$, $(E_r)_{\delta_1}(P) \le (E_r)_{\delta_2}(P)$;
(2) If $P \subseteq Q \subseteq A$, then for all $\delta \in (0, 1]$, $(E_r)_\delta(Q) \le (E_r)_\delta(P)$.
Proof. This holds by Theorem 4 and Proposition 8(1). □
From Propositions 8 and 9, we conclude that the more certain a δ-information structure is, the smaller its δ-rough entropy value becomes.
Corollary 6. Let $(U, A)$ be a SIS. Given $0 < \delta_1 \le \delta_2 \le 1$ and $P \subseteq Q \subseteq A$,
$$(E_r)_{\delta_1}(Q) \le (E_r)_{\delta_2}(Q) \le (E_r)_{\delta_2}(P), \qquad (E_r)_{\delta_1}(Q) \le (E_r)_{\delta_1}(P) \le (E_r)_{\delta_2}(P).$$
Proof. The result is a consequence of Proposition 9. □
Theorem 7. $(E_r)_\delta$ is an information granulation function.
Proof. This holds by Definition 16, Theorem 2 and Proposition 8. □
Theorem 8. Assume that $(U, A)$ is a SIS. Given $\delta \in (0, 1]$ and $P \subseteq A$,
$$(E_r)_\delta(P) + H_\delta(P) = \log_2 n.$$
Proof.
$$(E_r)_\delta(P) + H_\delta(P) = -\frac{1}{n} \sum_{i=1}^{n} \left( \log_2 \frac{1}{|S_{R_P^{G(\delta)}}(u_i)|} + \log_2 \frac{|S_{R_P^{G(\delta)}}(u_i)|}{n} \right) = -\frac{1}{n} \sum_{i=1}^{n} \log_2 \frac{1}{n} = \log_2 n. \qquad \Box$$
Corollary 7. Let $(U, A)$ be a SIS. Given $\delta \in (0, 1]$ and $P \subseteq A$,
$$0 \le H_\delta(P) \le \log_2 n.$$
Proof. By Proposition 7, $0 \le (E_r)_\delta(P) \le \log_2 n$. By Theorem 8, $H_\delta(P) = \log_2 n - (E_r)_\delta(P)$. Thus $0 \le H_\delta(P) \le \log_2 n$. □

6.3. Information Amounts in a SIS

Similar to Definition 10 in [61], the following definition is presented.
Definition 17. Let $(U, A)$ be a SIS. Given $\delta \in (0, 1]$ and $P \subseteq A$, the δ-information amount of $(U, P)$ is defined as
$$E_\delta(P) = \sum_{i=1}^{n} \frac{1}{n} \left( 1 - \frac{|S_{R_P^{G(\delta)}}(u_i)|}{n} \right).$$
Example 10. (Continued from Example 4)
$$E_\delta(A) = \sum_{i=1}^{10} \frac{1}{10} \left( 1 - \frac{|S_{R_A^{G(\delta)}}(u_i)|}{10} \right) = \frac{6.288}{10} \approx 0.6288.$$
Theorem 9. Let $(U, A)$ be a SIS. Given $\delta_1, \delta_2 \in (0, 1]$ and $P, Q \subseteq A$:
(1) If $S_{\delta_1}(P) \preceq S_{\delta_2}(Q)$, then $E_{\delta_2}(Q) \le E_{\delta_1}(P)$;
(2) If $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$, then $E_{\delta_2}(Q) < E_{\delta_1}(P)$.
Proof. (1) Obvious.
(2) Note that $S_{\delta_1}(P) \prec S_{\delta_2}(Q)$. Then, by the proof of Proposition 4(2), $1 \le |S_{R_P^{G(\delta_1)}}(u_i)| \le |S_{R_Q^{G(\delta_2)}}(u_i)|$ for each $i$, and
$$1 \le |S_{R_P^{G(\delta_1)}}(u_j)| < |S_{R_Q^{G(\delta_2)}}(u_j)|$$
for some $j$. Hence $E_{\delta_2}(Q) < E_{\delta_1}(P)$. □
Proposition 10. Let $(U, A)$ be a SIS.
(1) If $0 < \delta_1 \le \delta_2 \le 1$, then for all $P \subseteq A$, $E_{\delta_2}(P) \le E_{\delta_1}(P)$;
(2) If $P \subseteq Q \subseteq A$, then for all $\delta \in (0, 1]$, $E_\delta(P) \le E_\delta(Q)$.
Proof. This holds by Theorem 4 and Theorem 9(1). □
From Theorem 9 and Proposition 10, it can be concluded that the more certain a δ-information structure is, the larger its δ-information amount value becomes.
Corollary 8. Let $(U, A)$ be a SIS. Given $0 < \delta_1 \le \delta_2 \le 1$ and $P \subseteq Q \subseteq A$,
$$E_{\delta_2}(P) \le E_{\delta_2}(Q) \le E_{\delta_1}(Q), \qquad E_{\delta_2}(P) \le E_{\delta_1}(P) \le E_{\delta_1}(Q).$$
Proof. This holds by Proposition 10. □
Theorem 10. Assume that $(U, A)$ is a SIS. Given $\delta \in (0, 1]$ and $P \subseteq A$,
$$G_\delta(P) + E_\delta(P) = 1.$$
Proof.
$$G_\delta(P) + E_\delta(P) = \frac{1}{n^2} \sum_{i=1}^{n} \left[ |S_{R_P^{G(\delta)}}(u_i)| + \left( n - |S_{R_P^{G(\delta)}}(u_i)| \right) \right] = \frac{1}{n^2} \sum_{i=1}^{n} n = 1. \qquad \Box$$
Corollary 9. Let $(U, A)$ be a SIS. Given $\delta \in (0, 1]$ and $P \subseteq A$,
$$0 \le E_\delta(P) \le 1 - \frac{1}{n}.$$
Proof. By Proposition 3, $\frac{1}{n} \le G_\delta(P) \le 1$. By Theorem 10, $E_\delta(P) = 1 - G_\delta(P)$. Thus $0 \le E_\delta(P) \le 1 - \frac{1}{n}$. □
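All four measures are direct functions of the granule cardinalities (row sums of the kernel matrix), and Theorems 8 and 10 provide two exact identities that can serve as numerical sanity checks. A minimal sketch, under the same assumptions as the earlier code:

```python
import numpy as np

def uncertainty_measures(R):
    """G (Def. 14), H (Def. 15), Er (Def. 16) and E (Def. 17)
    computed from the kernel matrix R; s[i] = |S(u_i)|."""
    n = R.shape[0]
    s = R.sum(axis=1)
    G = s.sum() / n ** 2                   # delta-information granulation
    H = -np.mean(np.log2(s / n))           # delta-information entropy
    Er = np.mean(np.log2(s))               # delta-rough entropy
    E = np.mean(1 - s / n)                 # delta-information amount
    assert np.isclose(Er + H, np.log2(n))  # Theorem 8
    assert np.isclose(G + E, 1.0)          # Theorem 10
    return G, H, Er, E
```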
Example 11. (Continued from Example 2)
Let $(U, P_1)$, $(U, P_2)$, $(U, P_3)$ and $(U, P_4)$ be four subsystems of $(U, A)$, and pick $\delta^2 = 0.1, 0.2, \ldots, 0.9$. The following results are obtained:
(1) Considering monotonicity only, δ-information granulation and δ-rough entropy both increase monotonically as δ grows, which means that the uncertainty of the four subsystems increases with δ. Meanwhile, δ-information amount and δ-information entropy both decrease monotonically as δ grows; since larger values of these two measures indicate a more certain structure, this likewise means that the uncertainty of the four subsystems increases with δ (see Figure 1, Figure 2, Figure 3 and Figure 4).
(2) If $\delta^2 = 0.8$, then for δ-information granulation and δ-rough entropy we obtain $G_\delta(P_4) < G_\delta(P_3) < G_\delta(P_2) < G_\delta(P_1)$ and $(E_r)_\delta(P_4) < (E_r)_\delta(P_3) < (E_r)_\delta(P_2) < (E_r)_\delta(P_1)$, which shows that the larger the subsystem, the smaller the measured value. For δ-information amount and δ-information entropy, we have $E_\delta(P_1) < E_\delta(P_2) < E_\delta(P_3) < E_\delta(P_4)$ and $H_\delta(P_1) < H_\delta(P_2) < H_\delta(P_3) < H_\delta(P_4)$, which shows that the larger the subsystem, the larger the measured value (see Figure 5).

6.4. Effectiveness Analysis

In this subsection, we carry out an effectiveness analysis from a statistical point of view.

6.4.1. Dispersion Analysis

Below, the coefficient of variation is used to perform the effectiveness analysis.
Given a data set $D = \{d_1, d_2, \ldots, d_n\}$, its average value is $\bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i$, its standard deviation is $\sigma(D) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (d_i - \bar{d})^2}$, and its coefficient of variation is
$$CV(D) = \frac{\sigma(D)}{\bar{d}}.$$
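A minimal sketch of these dispersion statistics as defined above, using the population (divide-by-$n$) standard deviation:

```python
import numpy as np

def coefficient_of_variation(D):
    """CV(D) = sigma(D) / mean(D), with the population standard deviation."""
    d = np.asarray(D, dtype=float)
    return d.std() / d.mean()   # np.std uses the 1/n convention by default
```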
Example 12. (Continued from Example 4) Denote
$$X_{G_\delta} = \{G_\delta(P_1), G_\delta(P_2), G_\delta(P_3), G_\delta(P_4)\},$$
$$X_{H_\delta} = \{H_\delta(P_1), H_\delta(P_2), H_\delta(P_3), H_\delta(P_4)\},$$
$$X_{(E_r)_\delta} = \{(E_r)_\delta(P_1), (E_r)_\delta(P_2), (E_r)_\delta(P_3), (E_r)_\delta(P_4)\},$$
$$X_{E_\delta} = \{E_\delta(P_1), E_\delta(P_2), E_\delta(P_3), E_\delta(P_4)\}.$$
Then
$$X_{G_\delta} = \{0.7547, 0.5938, 0.4666, 0.3712\},$$
$$X_{H_\delta} = \{0.4108, 0.7572, 1.1058, 1.4382\},$$
$$X_{(E_r)_\delta} = \{2.9111, 2.5647, 2.2162, 1.8834\},$$
$$X_{E_\delta} = \{0.2453, 0.4062, 0.5334, 0.6288\}.$$
So $CV(X_{G_\delta}) = 0.1438$, $CV(X_{H_\delta}) = 0.3836$, $CV(X_{(E_r)_\delta}) = 0.3837$, $CV(X_{E_\delta}) = 0.1438$ (see Figure 6). Hence
$$CV(X_{G_\delta}) = CV(X_{E_\delta}) < CV(X_{H_\delta}) < CV(X_{(E_r)_\delta}),$$
which means that the dispersion degrees of $G_\delta$ and $E_\delta$ are the smallest.
From Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, the following results are obtained:
(1) If only monotonicity is required, then $G$, $E_r$, $H$ and $E$ can all evaluate the uncertainty of a SIS.
(2) If only the dispersion degree is considered, then $E$ performs better for measuring the uncertainty of a SIS.

6.4.2. Association Analysis

The Pearson correlation coefficient is used to assess the intensity of the linear correlation between data sets.
Given two data sets $X = \{u_1, \ldots, u_n\}$ and $Y = \{v_1, \ldots, v_n\}$, the Pearson correlation coefficient between $X$ and $Y$ is defined as
$$r(X, Y) = \frac{\sum_{i=1}^{n} (u_i - \bar{u})(v_i - \bar{v})}{\sqrt{\sum_{i=1}^{n} (u_i - \bar{u})^2} \sqrt{\sum_{i=1}^{n} (v_i - \bar{v})^2}},$$
where $\bar{u} = \frac{1}{n} \sum_{i=1}^{n} u_i$ and $\bar{v} = \frac{1}{n} \sum_{i=1}^{n} v_i$.
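Pearson's $r$ follows directly from the definition; a minimal sketch (the function name is ours, and the result agrees with numpy's built-in np.corrcoef):

```python
import numpy as np

def pearson_r(X, Y):
    """Pearson correlation coefficient between two equal-length data sets."""
    x, y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Sanity check: pearson_r(X, Y) agrees with np.corrcoef(X, Y)[0, 1].
```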
Example 13. (Continued from Example 4) The resulting correlation coefficients are shown in Table 2.
From Table 2, the conclusions shown in Table 3 are drawn, where “CPC”, “CNC”, “HPC” and “HNC” mean “completely positive correlation”, “completely negative correlation”, “high positive correlation” and “high negative correlation”, respectively.

7. Conclusions

In this article, information structures in a SIS have been described as set vectors. On this basis, the dependence between two information structures has been characterized, and properties of information structures have been obtained. By using information structures, granulation and entropy measures for a SIS have been investigated, and the amount of information in a SIS has also been considered. In future work, three-way decision in a SIS will be studied.

Author Contributions

The authors discussed the results of this paper together. J.H. designed the overall structure of the paper and improved the language; P.W. collected the data; Z.L. wrote the paper.

Funding

This work is supported by the High Level Innovation Team Program from Guangxi Higher Education Institutions of China (Document No. [2018] 35), the Natural Science Foundation of Guangxi (2018GXNSFDA294003, 2018GXNSFDA281028, 2018JJA180014), the Key Laboratory of Software Engineering in Guangxi University for Nationalities (2018-18XJSY-03) and the Engineering Project of Undergraduate Teaching Reform of Higher Education in Guangxi (2017JGA179).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L.A. Fuzzy logic equals computing with words. IEEE Trans. Fuzzy Syst. 1996, 4, 103–111. [Google Scholar] [CrossRef]
  2. Zadeh, L.A. Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets Syst. 1997, 90, 111–127. [Google Scholar] [CrossRef]
  3. Zadeh, L.A. Some reflections on soft computing, granular computing and their roles in the conception, design and utilization of information intelligent systems. Soft Comput. 1998, 2, 23–25. [Google Scholar] [CrossRef]
  4. Zadeh, L.A. A new direction in AI-Toward a computational theory of perceptions. AI Mag. 2001, 22, 73–84. [Google Scholar]
  5. Lin, T.Y. Granular computing on binary relations I: Data mining and neighborhood systems. In Rough Sets in Knowledge Discovery; Skowron, A., Polkowski, L., Eds.; Physica-Verlag: Heidelberg, Germany, 1998; pp. 107–121. [Google Scholar]
  6. Lin, T.Y. Granular computing on binary relations II: Rough set representations and belief functions. In Rough Sets in Knowledge Discovery; Skowron, A., Polkowski, L., Eds.; Physica-Verlag: Heidelberg, Germany, 1998; pp. 121–140. [Google Scholar]
  7. Lin, T.Y. Granular computing: Fuzzy logic and rough sets. In Computing with Words in Information Intelligent Systems; Zadeh, L.A., Kacprzyk, J., Eds.; Physica-Verlag: Heidelberg, Germany, 1999; pp. 183–200. [Google Scholar]
  8. Yao, Y.Y. Information granulation and rough set approximation. Int. J. Intell. Syst. 2001, 16, 87–104. [Google Scholar] [CrossRef]
  9. Yao, Y.Y. Probabilistic approaches to rough sets. Expert Syst. 2003, 20, 287–297. [Google Scholar] [CrossRef]
  10. Yao, Y.Y. Perspectives of Granular computing. In Proceedings of the 2005 IEEE International Conference on Granular Computing, Beijing, China, 25–27 July 2005; Volume 1, pp. 85–90. [Google Scholar]
  11. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  12. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  13. Ma, J.; Zhang, W.; Leung, Y.; Song, X. Granular computing and dual Galois connection. Inf. Sci. 2007, 177, 5365–5377. [Google Scholar] [CrossRef]
  14. Wu, W.Z.; Leung, Y.; Mi, J. Granular computing and knowledge reduction in formal contexts. IEEE Trans. Knowl. Data Eng. 2009, 21, 1461–1474. [Google Scholar]
  15. Zhang, L.; Zhang, B. Theory and Application of Problem Solving-Theory and Application of Granular Computing in Quotient Spaces; Tsinghua University Publishers: Beijing, China, 2007. [Google Scholar]
  16. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991. [Google Scholar]
  17. Pawlak, Z.; Skowron, A. Rough sets and boolean reasoning. Inf. Sci. 2007, 177, 41–73. [Google Scholar] [CrossRef]
  18. Pawlak, Z.; Skowron, A. Rough sets: Some extensions. Inf. Sci. 2007, 177, 28–40. [Google Scholar] [CrossRef]
  19. Pawlak, Z.; Skowron, A. Rudiments of rough sets. Inf. Sci. 2007, 177, 3–27. [Google Scholar] [CrossRef]
  20. Cornelis, C.; Jensen, R.; Martin, G.H.; Slezak, D. Attribute selection with fuzzy decision reducts. Inf. Sci. 2010, 180, 209–224. [Google Scholar] [CrossRef]
  21. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209. [Google Scholar] [CrossRef]
  22. Swiniarski, R.W.; Skowron, A. Rough set methods in feature selection and recognition. Pattern Recognit. Lett. 2003, 24, 833–849. [Google Scholar] [CrossRef]
  23. Slowinski, R.; Vanderpooten, D. A generalized definition of rough approximations based on similarity. IEEE Trans. Knowl. Data Eng. 2000, 12, 331–336. [Google Scholar] [CrossRef]
  24. Greco, S.; Inuiguchi, M.; Slowinski, R. Fuzzy rough sets and multiple-premise gradual decision rules. Int. J. Approx. Reason. 2006, 41, 179–211. [Google Scholar] [CrossRef]
  25. Yao, Y.Y. Relational interpretations of neighborhood operators and rough set approximation operators. Inf. Sci. 1998, 111, 239–259. [Google Scholar] [CrossRef]
  26. Blaszczynski, J.; Slowinski, R.; Szelag, M. Sequential covering rule induction algorithm for variable consistency rough set approaches. Inf. Sci. 2011, 181, 987–1002. [Google Scholar] [CrossRef]
  27. Kryszkiewicz, M. Rules in incomplete information systems. Inf. Sci. 1999, 113, 271–292. [Google Scholar] [CrossRef]
  28. Mi, J.S.; Leung, Y.; Wu, W.Z. An uncertainty measure in partition-based fuzzy rough sets. Int. J. Gen. Syst. 2005, 34, 77–90. [Google Scholar] [CrossRef]
  29. Wierman, M.J. Measuring uncertainty in rough set theory. Int. J. Gen. Syst. 1999, 28, 283–297. [Google Scholar] [CrossRef]
  30. Hu, Q.H.; Pedrycz, W.; Yu, D.R.; Lang, J. Selecting discrete and continuous features based on neighborhood decision error minimization. IEEE Trans. Syst. Man Cybern. Part B 2010, 40, 137–150. [Google Scholar]
  31. Jensen, R.; Shen, Q. Semantics-preserving dimensionality reduction: Rough and fuzzy rough based approaches. IEEE Trans. Knowl. Data Eng. 2004, 16, 1457–1471. [Google Scholar] [CrossRef]
  32. Jensen, R.; Shen, Q. New approaches to fuzzy-rough feature selection. IEEE Trans. Fuzzy Syst. 2009, 17, 824–838. [Google Scholar] [CrossRef]
  33. Qian, Y.H.; Liang, J.Y.; Pedrycz, W.; Dang, C.Y. An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618. [Google Scholar] [CrossRef]
  34. Thangavel, S.; Pethalakshmi, A. Dimensionality reduction based on rough set theory: A review. Appl. Soft Comput. 2009, 9, 1–12. [Google Scholar] [CrossRef]
  35. Xie, S.D.; Wang, Y.X. Construction of tree network with limited delivery latency in homogeneous wireless sensor networks. Wirel. Pers. Commun. 2014, 78, 231–246. [Google Scholar] [CrossRef]
  36. Cament, L.A.; Castillo, L.E.; Perez, J.P.; Galdames, F.J.; Perez, C.A. Fusion of local normalization and Gabor entropy weighted features for face identification. Pattern Recognit. 2014, 47, 568–577. [Google Scholar] [CrossRef]
  37. Gu, B.; Sheng, V.S.; Wang, Z.J.; Ho, D.; Osman, S. Incremental learning for v-support vector regression. Neural Netw. 2015, 67, 140–150. [Google Scholar] [CrossRef] [PubMed]
  38. Navarrete, J.; Viejo, D.; Cazorla, M. Color smoothing for RGB-D data using entropy information. Appl. Soft Comput. 2016, 46, 361–380. [Google Scholar] [CrossRef]
  39. Hempelmann, C.F.; Sakoglu, U.; Gurupur, V.P.; Jampana, S. An entropy-based evaluation method for knowledge bases of medical information systems. Expert Syst. Appl. 2016, 46, 262–273. [Google Scholar] [CrossRef]
  40. Delgado, A.; Romero, I. Environmental conflict analysis using an integrated grey clustering and entropy-weight method: A case study of a mining project in Peru. Environ. Model. Softw. 2016, 77, 108–121. [Google Scholar] [CrossRef]
  41. Bianucci, D.; Cattaneo, G. Information entropy and granulation co-entropy of partitions and coverings: A summary. Trans. Rough Sets 2009, 10, 15–66. [Google Scholar]
  42. Bianucci, D.; Cattaneo, G.; Ciucci, D. Entropies and co-entropies of coverings with application to incomplete information systems. Fundam. Inform. 2007, 75, 77–105. [Google Scholar]
  43. Beaubouef, T.; Petry, F.E.; Arora, G. Information-theoretic measures of uncertainty for rough sets and rough relational databases. Inf. Sci. 1998, 109, 185–195. [Google Scholar] [CrossRef]
  44. Liang, J.Y.; Shi, Z.Z. The information entropy, rough entropy and knowledge granulation in rough set theory. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2004, 12, 37–46. [Google Scholar] [CrossRef]
  45. Liang, J.Y.; Shi, Z.Z.; Li, D.Y.; Wierman, M.J. The information entropy, rough entropy and knowledge granulation in incomplete information systems. Int. J. Gen. Syst. 2006, 35, 641–654. [Google Scholar] [CrossRef]
  46. Dai, J.H.; Tian, H.W. Entropy measures and granularity measures for set-valued information systems. Inf. Sci. 2013, 240, 72–82. [Google Scholar] [CrossRef]
  47. Qian, Y.H.; Liang, J.Y.; Wu, W.Z.; Dang, C.Y. Knowledge structure, knowledge granulation and knowledge distance in a knowledge base. Int. J. Approx. Reason. 2009, 50, 174–188. [Google Scholar] [CrossRef]
  48. Qian, Y.H.; Liang, J.Y.; Wu, W.Z.; Dang, C.Y. Information granularity in fuzzy binary GrC model. IEEE Trans. Fuzzy Syst. 2011, 19, 253–264. [Google Scholar] [CrossRef]
  49. Xu, W.H.; Zhang, X.Y.; Zhang, W.X. Knowledge granulation, knowledge entropy and knowledge uncertainty measure in ordered information systems. Appl. Soft Comput. 2009, 9, 1244–1251. [Google Scholar]
  50. Dai, J.H.; Wei, B.J.; Zhang, X.H.; Zhang, Q.L. Uncertainty measurement for incomplete interval-valued information systems based on α-weak similarity. Knowl.-Based Syst. 2017, 136, 159–171. [Google Scholar] [CrossRef]
  51. Xie, N.X.; Liu, M.; Li, Z.W.; Zhang, G.Q. New measures of uncertainty for an interval-valued information system. Inf. Sci. 2019, 470, 156–174. [Google Scholar] [CrossRef]
  52. Zhang, G.Q.; Li, Z.W.; Wu, W.Z.; Liu, X.F.; Xie, N.X. Information structures and uncertainty measures in a fully fuzzy information system. Int. J. Approx. Reason. 2018, 101, 119–149. [Google Scholar] [CrossRef]
  53. Moser, B. On representing and generating kernels by fuzzy equivalence relations. J. Mach. Learn. Res. 2006, 7, 2603–2630. [Google Scholar]
  54. Zeng, A.P.; Li, T.R.; Liu, D.; Zhang, J.B.; Chen, H.M. A fuzzy rough set approach for incremental feature selection on hybrid information systems. Fuzzy Sets Syst. 2015, 258, 39–60. [Google Scholar] [CrossRef]
  55. Moser, B. On the T-transitivity of kernels. Fuzzy Sets Syst. 2006, 157, 1787–1796. [Google Scholar] [CrossRef]
  56. Yao, Y.Y.; Noroozi, N. A unified framework for set-based computations. In Proceedings of the 3rd International Workshop on Rough Sets and Soft Computing, San Jose, CA, USA, 10–12 November 1994; pp. 10–12. [Google Scholar]
  57. Shawe-Taylor, J.; Cristianini, N. Kernel Methods for Pattern Analysis; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  58. Yang, S.; Yan, S.; Zhang, C.; Tang, X. Bilinear analysis for kernel selection and nonlinear feature extraction. IEEE Trans. Neural Netw. 2007, 18, 1442–1452. [Google Scholar] [CrossRef]
  59. Hu, Q.H.; Xie, Z.X.; Yu, D.R. Hybrid attribute reduction based on a novel fuzzy-rough model and information granulation. Pattern Recognit. 2007, 40, 3509–3521. [Google Scholar] [CrossRef]
  60. Hu, Q.H.; Zhang, L.; Chen, D.G.; Pedrycz, W.; Yu, D.R. Gaussian kernel based fuzzy rough sets: Model, uncertainty measures and applications. Int. J. Approx. Reason. 2010, 51, 453–471. [Google Scholar] [CrossRef]
  61. Liang, J.Y.; Qu, K.S. Information measures of roughness of knowledge and rough sets for information systems. J. Syst. Sci. Syst. Eng. 2002, 10, 95–103. [Google Scholar]
Figure 1. Uncertainty measurement of a SIS.
Figure 2. Uncertainty measurement of a SIS.
Figure 3. Uncertainty measurement of a SIS.
Figure 4. Uncertainty measurement of a SIS.
Figure 5. Uncertainty measures of subsystems with a fixed δ.
Figure 6. CV-values for measuring the uncertainty of the subsystems.
Table 1. A SIS $(U, A)$.

| $U$ | Price ($a_1$) | Mileage ($a_2$) | Size ($a_3$) | Max-Speed ($a_4$) |
|---|---|---|---|---|
| $u_1$ | {high} | {high} | {full} | {high, mid, low} |
| $u_2$ | {mid, low} | {high, mid, low} | {compact} | {high, mid, low} |
| $u_3$ | {high, low} | {high} | {full} | {high} |
| $u_4$ | {high} | {high, low} | {compact} | {low} |
| $u_5$ | {mid} | {high, mid} | {full} | {high, low} |
| $u_6$ | {high, mid} | {mid} | {compact} | {high} |
| $u_7$ | {high, mid, low} | {high} | {full} | {high, low} |
| $u_8$ | {low} | {high, low} | {compact} | {low} |
| $u_9$ | {high} | {mid} | {full} | {low} |
| $u_{10}$ | {high} | {high, mid, low} | {compact} | {mid} |
Table 2. r-values of the sixteen pairs of measure-value sets for measuring the uncertainty of the subsystems.

| $r$ | $X_{G_\delta}$ | $X_{H_\delta}$ | $X_{(E_r)_\delta}$ | $X_{E_\delta}$ |
|---|---|---|---|---|
| $X_{G_\delta}$ | 1 | | | |
| $X_{H_\delta}$ | −0.99447 | 1 | | |
| $X_{(E_r)_\delta}$ | 0.99444 | −1 | 1 | |
| $X_{E_\delta}$ | −1 | 0.99446 | 0.99446 | 1 |
Table 3. The correlation between two measures.

| | $G$ | $H$ | $E_r$ | $E$ |
|---|---|---|---|---|
| $G$ | CPC | | | |
| $H$ | HNC | CPC | | |
| $E_r$ | HPC | CNC | CPC | |
| $E$ | CNC | HPC | HNC | CPC |
