
Lower Approximation Reduction Based on Discernibility Information Tree in Inconsistent Ordered Decision Information Systems

Jia Zhang, Xiaoyan Zhang and Weihua Xu
1 School of Mathematics and Statistics, Chongqing University of Technology, Chongqing 400054, China
2 School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(12), 696; https://doi.org/10.3390/sym10120696
Submission received: 5 November 2018 / Revised: 17 November 2018 / Accepted: 22 November 2018 / Published: 3 December 2018

Abstract

Attribute reduction is an important topic in rough set theory research, and it has been widely used in many applications. Reduction based on an identifiable matrix is a common method, but a large amount of space is occupied by repeated and redundant identifiable attribute sets. Therefore, a new method for attribute reduction is proposed, which compresses and stores the identifiable attribute sets in a discernibility information tree. In this paper, the discernibility information tree based on the lower approximation identifiable matrix is constructed in an inconsistent decision information system under dominance relations. Then, combining the lower approximation function with the discernibility information tree, a complete algorithm of lower approximation reduction based on the discernibility information tree is established. Finally, the rationality and correctness of this method are verified by an example.

1. Introduction

Rough set theory [1], as a new mathematical tool, is mainly used to deal with imprecise, inconsistent, and incomplete information. Because the purpose and starting point of rough set theory is to analyze and deduce data directly, discover hidden knowledge and reveal potential laws, it is a method of data mining or knowledge discovery [2,3,4,5]. Compared with other methods dealing with uncertainties, such as data mining based on evidence theory, the most significant difference is that it does not require any prior knowledge beyond the data set that the problem needs to deal with. In recent years, great advances have been made in the popularization and application of rough set theory. Based on formal concept analysis, Xu et al. [6] proposed two operators between objects and attributes, established a new cognitive system model, and provided a method for arbitrary transformation of information granules in a cognitive system. Then, Xu and Li [7] established a model and mechanism of a two-way learning system in a fuzzy dataset based on information granules. Medina and Ojeda-Aciego [8] proposed that multi-adjoint frames can be applied to general T-concept lattices and then proved that the common information of two concept lattices can be regarded as a sublattice of the Cartesian product of two concept lattices. In the field of data mining, Kumar et al. [9] combined the fuzzy C-means algorithm, based on fuzzy membership, with the improved artificial bee colony algorithm and proposed a hybrid algorithm to overcome the shortcomings of local optimum in the original clustering problem.
Rough set and fuzzy set are closely related to each other and both can deal with imprecision, vagueness, and uncertainty in incomplete data analysis. The application of fuzzy sets is also very extensive. Pozna et al. [10] proposed a new data symbol representation framework, gave the related primitive operators, and discussed the applications related to the modeling of a fuzzy reasoning system. Jankowski et al. [11] analyzed the impact of online advertising on advertising effect and user experience and proposed a balanced method of advertising resource development based on fuzzy multi-objective modeling. Not only in these aspects, but also in other areas, rough set theory plays an important role, such as in pattern recognition, machine learning, intelligent control and other fields [12,13,14,15,16,17].
Attribute reduction [18,19,20,21,22] is one of the core contents of knowledge discovery. It describes how to delete unnecessary knowledge from an information system, so as to reduce the quantity of information to be processed in data mining and improve its efficiency. Since Skowron [23] proposed an attribute reduction algorithm based on an identifiable matrix, the intuitive simplicity of the identifiable matrix has attracted the attention of many scholars. Liang et al. [24] extended information entropy, which can effectively measure the fuzziness of rough sets. Cao et al. [25] measured the importance of attributes in decision tables from the definition of information entropy and, based on this, proposed an effective decision table reduction algorithm. Hu et al. [26] established a complete rough set reduction algorithm based on the concept lattice model and investigated a decision table attribute reduction method based on concept lattices. Jiang and Wang [27] studied a rough set attribute reduction algorithm based on the discernibility matrix: the influence of different discernibility matrices on the efficiency of attribute reduction was analyzed, and a new discernibility matrix was defined to reduce the number of elements in the matrix and improve efficiency. Yang [28] put forward a new discernibility matrix storage method (C-Tree) to realize the compressed storage of a discernibility matrix. In recent references, there are many other attribute reduction methods [29,30,31,32,33,34,35,36,37]. Although the appearance of duplicate elements in a discernibility matrix is eliminated and compressed storage of the discernibility matrix is realized, these methods cannot remove redundant parent-set elements. Hence, Jiang [38] investigated an attribute reduction algorithm based on a discernibility information tree. This algorithm can not only delete duplicate elements in the discernibility matrix, but also eliminate the influence of parent-set elements in most cases and realize compressed storage of the discernibility matrix. Because the above research was carried out under the equivalence relation, and the number of elements in the discernibility matrix under a dominance relation is twice that under an equivalence relation, the advantage of discernibility information tree compression and storage is even more obvious in an ordered information system. Based on this advantage, we generalize the discernibility information tree from the equivalence relation to the dominance relation. Each identifiable attribute set in the identifiable matrix is regarded as a path mapped onto the discernibility information tree, and identical identifiable attribute sets are stored on the same path. Compared with the identifiable matrix, the discernibility information tree eliminates the space occupied by repeated attribute sets. For attribute sets with the same prefix, take {a, b, c, d} and {a, b} for example: when the two paths occur simultaneously, the path of the attribute set with fewer elements, {a, b}, is selected to replace the path of {a, b, c, d}, and the path of {a, b} is mapped onto the discernibility information tree, which reduces the space occupied by redundant elements. Finally, based on this tree, an algorithm of lower approximation reduction is established. This is the motivation behind the research presented in this article.
The paper is organized as follows. In Section 2, related concepts and definitions are reviewed briefly, including the basic knowledge of lower approximation reduction and discernibility information trees. In Section 3, the discernibility information tree based on the lower approximation identifiable matrix is constructed by combining the lower approximation identifiable attribute sets with an ordered tree, and the corresponding algorithm is provided; then, a complete lower approximation reduction algorithm is established based on the discernibility information tree. In Section 4, to illustrate the effectiveness of the proposed method, a detailed example is presented to verify and explain the lower approximation reduction based on the discernibility information tree in an inconsistent ordered decision information system. Finally, Section 5 gives some conclusions.

2. Preliminary

In this section, we review the basic knowledge needed in this article, such as inconsistent ordered decision information systems, the lower approximation identifiable matrix, and lower approximation reduction based on dominance relations. At the same time, we introduce the related concepts of the discernibility information tree needed later in this paper.

2.1. Inconsistent Ordered Decision Information System (IODIS)

A decision information system is a quintuple $DIS = (U, AT \cup DT, F, G)$, where $U$ is a non-empty finite universe, $AT$ is a finite non-empty set of condition attributes, $DT$ is a finite non-empty set of decision attributes with $AT \cap DT = \emptyset$, and $F = \{f : U \to V_a,\ a \in AT\}$ is the set of condition attribute value mappings, in which $f : U \times AT \to V_a$ is a total function such that $f(x, a) \in V_a$ for each $a \in AT$, $x \in U$. In the meantime, $G = \{g : U \to V_d,\ d \in DT\}$ is the set of decision attribute value mappings, in which $g : U \times DT \to V_d$ is a total function such that $g(x, d) \in V_d$ for each $d \in DT$, $x \in U$.
Definition 1.
Let $DIS = (U, AT \cup DT, F, G)$ be a decision information system. If $R_{AT} \nsubseteq R_{DT}$, then it is called an inconsistent decision information system (IDIS); otherwise, it is called a consistent decision information system.
In classical information systems in the Pawlak approximation space, each condition attribute set and decision attribute set determines a binary indiscernibility relation, i.e., an equivalence relation. However, in real life, many practical problems are based on dominance relations. We suppose that the domain of a criterion $a \in AT$ is preordered by the relation $\succeq_a$. The statement $x_i \succeq_a x_j$ means that $x_i$ is at least as good as $x_j$ with respect to the criterion $a$. We say that the attributes in $A \subseteq AT$ are criteria. Then $x_i \succeq_A x_j \Leftrightarrow x_i \succeq_a x_j\ (\forall a \in A)$. The dominance relation with respect to the condition attribute set $AT$ can be defined as
$R_{AT} = \{(x_i, x_j) \in U \times U \mid x_j \succeq_a x_i,\ \forall a \in AT\}.$
Based on the dominance relation $R_{AT}$, dominance classes can be defined as
$[x_i]_{AT} = \{x_j \in U \mid (x_i, x_j) \in R_{AT}\}.$
Similarly, the dominance relation and dominance classes with respect to the decision attribute set $DT$ can be defined as
$R_{DT} = \{(x_i, x_j) \in U \times U \mid x_j \succeq_d x_i,\ \forall d \in DT\},$
$[x_i]_{DT} = \{x_j \in U \mid (x_i, x_j) \in R_{DT}\}.$
Definition 2.
Let $IDIS = (U, AT \cup DT, F, G)$ be an inconsistent decision information system, $A \subseteq AT$, $D \subseteq DT$. If $R_A$ and $R_D$ are dominance relations corresponding to the condition attribute set $A$ and the decision attribute set $D$ in the IDIS, then it is called an inconsistent ordered decision information system (IODIS).
In the following, an inconsistent ordered decision information system (IODIS) will be denoted by $S$ for brevity.
Proposition 1.
Let $S = (U, AT \cup DT, F, G)$ be an inconsistent ordered decision information system, $A \subseteq AT$, $x_i, x_j \in U$. Then the following results hold:
(1) $R_A \supseteq R_{AT}$, $[x_i]_A \supseteq [x_i]_{AT}$;
(2) $x_j \in [x_i]_A,\ x_i \in [x_j]_A \Rightarrow [x_j]_A = [x_i]_A$;
(3) $[x_i]_A = \bigcup \{[x_j]_A \mid x_j \in [x_i]_A\}$;
(4) $[x_i]_A = [x_j]_A \Leftrightarrow f(x_i, a) = f(x_j, a)\ (\forall a \in A)$.
When the binary relation is a dominance relation rather than an equivalence relation, the dominance classes constitute a cover of the universe and no longer form a partition.
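To make the dominance classes concrete, the following Python sketch computes $[x]_A$ directly from a value table. The data layout (a dictionary mapping each object to its attribute values) is an illustrative assumption and is not notation used in the paper.

    # A minimal sketch, assuming objects are stored as {object: {attribute: value}}.
    def dominance_class(values, x, attrs):
        """[x]_A: the objects whose values are >= those of x on every attribute in attrs."""
        return {y for y, row in values.items()
                if all(row[a] >= values[x][a] for a in attrs)}

    # Hypothetical toy data (not Table 1): x2 dominates x1, x3 does not.
    values = {"x1": {"a": 2, "b": 1}, "x2": {"a": 3, "b": 1}, "x3": {"a": 1, "b": 0}}
    print(dominance_class(values, "x1", ["a", "b"]))   # {'x1', 'x2'}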

2.2. Lower Approximation Reduction in an IODIS

In an inconsistent ordered decision information system, redundant condition attributes are screened out by lower approximation reduction, which makes it easier to mine hidden information and discover rules; moreover, deleting unnecessary condition attributes saves storage space. Let $S = (U, AT \cup DT, F, G)$ be an inconsistent ordered decision information system, $A \subseteq AT$, $D \subseteq DT$, and let $R_A$, $R_D$ be the dominance relations with respect to $A$ and $D$. In the following we denote
$U / R_A = \{[x_i]_A \mid x_i \in U\},$
$U / R_D = \{D_1, D_2, \ldots, D_r\},$
$\underline{\eta}_A = (\underline{R_A}(D_1), \underline{R_A}(D_2), \ldots, \underline{R_A}(D_r)),$
where $\underline{\eta}_A$ is called the lower approximation function based on $R_A$.
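The lower approximation function can be read as a tuple of dominance-based lower approximations, one per decision class union. A short sketch under the same assumptions as above, and using the standard definition $\underline{R_A}(D) = \{x \in U \mid [x]_A \subseteq D\}$, might look as follows.

    def lower_approx(values, attrs, D):
        """underline{R_A}(D): objects whose whole dominance class is contained in D."""
        return {x for x in values if dominance_class(values, x, attrs) <= set(D)}

    def eta(values, attrs, decision_classes):
        """The lower approximation function: one lower approximation per decision class union D_i."""
        return tuple(frozenset(lower_approx(values, attrs, D)) for D in decision_classes)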
Definition 3.
Let $S = (U, AT \cup DT, F, G)$ be an IODIS and $A \subseteq AT$. If $\underline{\eta}_A = \underline{\eta}_{AT}$, then $A$ is called a lower approximation coordination set based on the dominance relation $R_{AT}$. Moreover, if no proper subset $B \subset A$ satisfies $\underline{\eta}_B = \underline{\eta}_{AT}$, then $A$ is called a lower approximation reduction of the IODIS.
According to the above Definition 3, it is easy to get the conclusions included in Theorem 1.
Theorem 1.
Let $S = (U, AT \cup DT, F, G)$ be an IODIS and $A \subseteq AT$. Then
$A$ is a lower approximation coordination set $\Leftrightarrow$ for every $D_i \in U/R_D$,
$\underline{R_A}(D_i) = \underline{R_{AT}}(D_i)\ (i = 1, 2, \ldots, r).$
Definition 4.
Let $S = (U, AT \cup DT, F, G)$ be an IODIS and $D_i \in U/R_D$ $(i = 1, 2, \ldots, r)$. Then denote
$D^{\underline{\eta}}_{AT} = \{(x_i, x_j) \mid x_i \in \underline{R_{AT}}(D_i),\ x_j \notin \underline{R_D}(D_i)\},$
$D^{\underline{\eta}}_{AT}(x_i, x_j) = \{a_k \in AT \mid f(x_i, a_k) > f(x_j, a_k)\}$ if $(x_i, x_j) \in D^{\underline{\eta}}_{AT}$, and $D^{\underline{\eta}}_{AT}(x_i, x_j) = \emptyset$ if $(x_i, x_j) \notin D^{\underline{\eta}}_{AT}$.
Then $D^{\underline{\eta}}_{AT}(x_i, x_j)$ is called the lower approximation identifiable attribute set of the objects $x_i, x_j$ under the dominance relation $R_{AT}$ in an IODIS. Moreover, collecting the lower approximation identifiable attribute sets of all object pairs, the lower approximation identifiable matrix is defined as
$Dis^{\underline{\eta}}_{AT} = \big(D^{\underline{\eta}}_{AT}(x_i, x_j) \mid x_i, x_j \in U\big)_{|U| \times |U|}.$
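A sketch of Definition 4 under the reconstruction above: for every pair $(x_i, x_j)$ in $D^{\underline{\eta}}_{AT}$, the attributes on which $x_i$ strictly exceeds $x_j$ form the identifiable attribute set. The pair-selection rule (taking $x_j$ outside $D_i$, i.e., assuming $\underline{R_D}(D_i) = D_i$ for the decision class unions) and the helpers are assumptions carried over from the earlier sketches, not the authors' implementation.

    def identifiable_sets(values, attrs, decision_classes):
        """Collect the non-empty lower approximation identifiable attribute sets D_AT(x_i, x_j)."""
        sets = {}
        for D in decision_classes:
            for xi in lower_approx(values, attrs, D):
                for xj in set(values) - set(D):      # assumes underline{R_D}(D_i) = D_i
                    diff = frozenset(a for a in attrs if values[xi][a] > values[xj][a])
                    if diff:
                        sets[(xi, xj)] = diff
        return sets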
Theorem 2.
Let $S = (U, AT \cup DT, F, G)$ be an IODIS and $A \subseteq AT$. Then
$\underline{\eta}_A = \underline{\eta}_{AT} \Leftrightarrow \forall (x_i, x_j) \in D^{\underline{\eta}}_{AT},\ A \cap D^{\underline{\eta}}_{AT}(x_i, x_j) \neq \emptyset.$
Definition 5.
Let $S = (U, AT \cup DT, F, G)$ be an IODIS and let $Dis^{\underline{\eta}}_{AT}$ be the lower approximation identifiable matrix based on the dominance relation $R_{AT}$. Then the lower approximation identifiable formula can be defined as
$M^{\underline{\eta}}_{AT} = \bigwedge \Big\{ \bigvee \{a_k \mid a_k \in D^{\underline{\eta}}_{AT}(x_i, x_j)\} \;\Big|\; (x_i, x_j) \in D^{\underline{\eta}}_{AT} \Big\}.$
Theorem 3.
Let $S = (U, AT \cup DT, F, G)$ be an IODIS. Then the minimal disjunctive normal form of the identifiable formula $M^{\underline{\eta}}_{AT}$ is
$Min^{\underline{\eta}} = \bigvee_{k=1}^{p} \Big( \bigwedge_{s=1}^{q_k} a_s \Big).$
Let $B_k = \{a_s \mid s = 1, 2, \ldots, q_k\}$; then $\{B_k \mid k = 1, 2, \ldots, p\}$ is the set of all lower approximation reductions based on the dominance relation $R_{AT}$.
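Theorem 3 states that the prime implicants of the identifiable formula are exactly the lower approximation reductions. A small brute-force sketch of this step (absorb superset clauses, expand the conjunction of disjunctions, keep the minimal conjunctions) is given below; it is only practical for small attribute sets and is not claimed to be the procedure used by the authors.

    from itertools import product

    def all_reductions(clauses):
        """clauses: the non-empty identifiable attribute sets.
        Returns the minimal attribute sets that intersect every clause (prime implicants)."""
        # Absorption law: a clause that is a superset of another clause is redundant.
        clauses = [set(c) for c in set(map(frozenset, clauses))]
        clauses = [c for c in clauses if not any(d < c for d in clauses)]
        # Expand the conjunctive normal form and keep only the minimal conjunctions.
        candidates = {frozenset(choice) for choice in product(*clauses)}
        return {c for c in candidates if not any(d < c for d in candidates)}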

2.3. Discernibility Information Tree

A discernibility information tree is an ordered tree whose branches are sorted from left to right according to the order of the condition attributes; it is a compressed storage structure for the identifiable attribute sets of an identifiable matrix. It eliminates the storage of duplicated elements and reduces unnecessary waste of space. The characteristics of the discernibility information tree are as follows [17]:
(1) Each subtree of the discernibility information tree is also an ordered tree, arranged from left to right in the order of the condition attribute set.
(2) Each node of the discernibility information tree is composed of four parts: a prefix pointer, a successor pointer, a node name, and a same-name pointer. The prefix pointer points to the previous-level node (i.e., the parent node) of this node, and the successor pointer points to the successor node (i.e., the child node) of this node. The node name records the condition attribute corresponding to the node, and the same-name pointer points to a node with the same node name on another path of the discernibility information tree (a minimal node layout is sketched after this list).
(3) Each node in the discernibility information tree has at most $|AT|$ child nodes, where $|AT|$ is the number of condition attributes in the ordered decision information system.
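As an illustration of characteristic (2), one possible node layout is sketched below in Python. The field names are assumptions for illustration; in particular, a dictionary of children is used as a simple stand-in for the successor and same-name pointers described above.

    class DTNode:
        """A node of the discernibility information tree (illustrative layout, not the paper's)."""
        def __init__(self, name, parent=None):
            self.name = name          # node name: the condition attribute represented by this node
            self.parent = parent      # prefix pointer: the previous-level (parent) node
            self.children = {}        # stand-in for the successor pointer: child nodes by attribute
            self.same_name = None     # same-name pointer: a node with the same name on another path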

3. The Method of the Lower Approximation Reduction Based on Discernibility Information Tree in an IODIS

In this section, a discernibility information tree algorithm based on the lower approximation identifiable matrix is given under the dominance relation.
The discernibility information tree is a virtual tree structure: instead of processing the data object by object, only the nodes in the tree need to be handled. Each identifiable attribute set is treated as a branch that forms a path in the tree, so identical identifiable attribute sets are stored on the same path, which reduces the waste of storage space. In addition, identifiable attribute sets with the same prefix are mapped to the path corresponding to the smallest identifiable attribute set; by not extending the path, the space occupied by redundant nodes is reduced.
It can be seen from Algorithm 1 that the strategies of not extending paths and of deleting subtrees are used when constructing the discernibility information tree based on the lower approximation identifiable matrix in an ordered decision information system. From the construction of the discernibility information tree, the following characteristics can be summarized. First, identical lower approximation identifiable attribute sets are mapped to the same path. Second, identifiable attribute sets with the same prefix are mapped to the path corresponding to the smallest identifiable attribute set; for example, for the identifiable attribute sets {a, b, c, d, e} and {a}, which share the prefix a, the smallest identifiable attribute set {a} is chosen and mapped to the path <a>. Finally, identifiable attribute sets in the lower approximation identifiable matrix may share a prefix, such as {b, c, d, e} and {b, d}, which share the prefix b, so their paths share nodes. In this way the lower approximation identifiable matrix is compressed and stored, and the space-time complexity of constructing the discernibility information tree based on the lower approximation identifiable matrix is reduced.
Algorithm 1: The algorithm of discernibility information tree in an inconsistent ordered decision information system.
To introduce the algorithm steps more clearly and intuitively, a flow chart is used to better explain how Algorithm 1 is carried out (see Figure 1).
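The pseudocode of Algorithm 1 is given in the original article as a figure; the following sketch is only one reading of its two key strategies, the non-extended path (stop when an already stored set is a prefix of the new one) and subtree deletion (when the new set is a prefix of stored ones), built on the DTNode layout assumed in Section 2.3.

    def insert_set(root, attr_order, attr_set):
        """Insert one identifiable attribute set as a path (illustrative reading of Algorithm 1)."""
        node, created = root, False
        for a in sorted(attr_set, key=attr_order.index):
            if node is not root and not created and not node.children:
                return                          # non-extended path: a stored set is already a prefix
            if a in node.children:
                node, created = node.children[a], False
            else:
                child = DTNode(a, parent=node)
                node.children[a] = child
                node, created = child, True
        node.children.clear()                   # delete subtree: the new set is a prefix of stored ones

    def build_tree(attr_order, identifiable_sets):
        """Map every non-empty identifiable attribute set of the matrix onto the tree."""
        root = DTNode(name=None)
        for s in identifiable_sets:
            if s:                               # empty cells of the matrix are skipped
                insert_set(root, attr_order, s)
        return root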
In the following, based on the lower approximation identifiable matrix, the properties and proofs of the discernibility information tree are given.
Theorem 4.
The discernibility information tree based on the lower approximation identifiable matrix includes all the condition attributes that are needed to obtain the lower approximation reduction of the inconsistent ordered decision information system.
Proof. 
Let $PS$ denote the set of lower approximation identifiable attribute sets corresponding to all paths in the discernibility information tree, and let the lower approximation identifiable matrix be $Dis^{\underline{\eta}}_{AT}$. It is easy to see that $PS \subseteq Dis^{\underline{\eta}}_{AT}$. For any $x, y \in U$, we have $D^{\underline{\eta}}_{AT}(x, y) \in Dis^{\underline{\eta}}_{AT}$. By the construction of the tree, there exist $(x', y') \in U \times U$ with $D^{\underline{\eta}}_{AT}(x', y') \in PS$ such that $D^{\underline{\eta}}_{AT}(x', y') \subseteq D^{\underline{\eta}}_{AT}(x, y)$, that is, $D^{\underline{\eta}}_{AT}(x', y') \cap D^{\underline{\eta}}_{AT}(x, y) = D^{\underline{\eta}}_{AT}(x', y')$. Therefore, the discernibility information tree based on the lower approximation identifiable matrix includes all the condition attributes that are needed to obtain the lower approximation reduction of the inconsistent ordered decision information system. □
Theorem 5.
In the discernibility information tree based on the lower approximation identifiable matrix, the union of the identifiable attribute sets corresponding to paths with only one node constitutes the core $Core_{DT}(AT)$ of the condition attribute set in the decision information system.
Proof. 
According to the discernibility information tree based on the lower approximation identifiable matrix, if the attribute name of some node in the tree is $a$ and there exists a path in the discernibility information tree that includes only the node $a$, then there exists a lower approximation identifiable attribute set $\{a\}$ corresponding to that path. In the lower approximation identifiable matrix, if $a \in AT$ and $\{a\} \in Dis^{\underline{\eta}}_{AT}$, then $a$ is said to be necessary relative to the condition attribute set $AT$. The collection of all necessary attributes of $AT$ constitutes the relative core of $AT$ with respect to the decision attribute set $DT$. Thus, the theorem is proved. □
Theorem 6.
Let $A$ be the condition attribute set consisting of the attributes represented by all the children of the root node in the discernibility information tree. Then $\underline{\eta}_A = \underline{\eta}_{AT}$ in an IODIS.
Proof. 
Let $PS$ be the set of identifiable attribute sets corresponding to all paths of the discernibility information tree based on the lower approximation identifiable matrix. Theorem 4 shows that this tree contains all the attributes that are needed to obtain the lower approximation reduction of the inconsistent ordered decision information system. For any $D^{\underline{\eta}}_{AT}(x, y) \in PS$, the first node of its path is a child of the root, so $D^{\underline{\eta}}_{AT}(x, y) \cap A \neq \emptyset$. Therefore, according to the characterization of lower approximation coordination sets in Theorem 2, we obtain $\underline{\eta}_A = \underline{\eta}_{AT}$. □
For a given inconsistent ordered decision information system $S = (U, AT \cup DT, F, G)$ based on a dominance relation, let the number of objects be $|U|$ and the number of condition attributes be $|AT|$. Then at most $|U|^2$ non-empty subsets of the condition attribute set can appear in the lower approximation identifiable matrix. Assume that the number of non-empty subsets of the condition attribute set in the identifiable matrix is $P$, with $P \leq |U|^2$. As can be seen from the process of constructing the discernibility information tree based on the lower approximation identifiable matrix, the tree can have at most $P$ different paths, and there are at most $|AT|$ nodes on each path. Therefore, there are at most $|AT| \times P$ nodes in the discernibility information tree. Because many paths share prefixes in the discernibility information tree based on the lower approximation identifiable matrix, the actual number of nodes is much smaller than $|AT| \times P$. Thus, in the worst case, the space complexity of the discernibility information tree based on the lower approximation identifiable matrix is $O(|AT||U|^2)$.
In addition, the algorithm iterates at most $|U|^2$ times. During each iteration, it inserts at most $|AT|$ nodes and deletes $N_i$ nodes ($i = 1, 2, \ldots, |U|^2$). Thus, the time complexity of the algorithm is $O(|AT||U|^2 + (N_1 + N_2 + \cdots + N_{|U|^2}))$. Since the discernibility information tree contains at most $|AT||U|^2$ nodes, $|AT||U|^2$ is an upper bound of $N_1 + N_2 + \cdots + N_{|U|^2}$. Therefore, the time complexity of the discernibility information tree based on the lower approximation identifiable matrix is $O(|AT||U|^2)$.
Constructing the discernibility information tree based on the lower approximation identifiable matrix avoids classifying and sorting a large number of identifiable attribute sets. Based on an ordered tree, a concise and intuitive path mapping is established, which not only realizes the compression and storage of the data, but also yields the lower approximation reduction of the inconsistent ordered decision information system. In the following, a complete algorithm of lower approximation reduction based on the discernibility information tree is given.
Based on the analysis of the time complexity of Algorithm 1, we know that there are at most $|AT||U|^2$ nodes in the discernibility information tree. According to the construction process of Algorithm 2, the maximum number of iterations is $|AT|$. Assuming that $N_i$ ($i = 1, 2, \ldots, |AT|$) nodes are deleted during the $i$-th iteration, the total number of nodes deleted after $|AT|$ iterations, $N_1 + N_2 + \cdots + N_{|AT|}$, is at most $|AT||U|^2$. Thus, the complexity of Algorithm 2 is $O(|AT||U|^2)$.
Algorithm 2: The algorithm of lower approximation reduction based on the discernibility information tree in an IODIS.
The flow chart in Figure 2 illustrates the implementation process of Algorithm 2 more directly, which is helpful to understand the operation of the program.
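The sketch below is one way to read Algorithm 2 from the flow chart and the completeness argument that follows: collect the attributes of single-node paths as the core, then repeatedly take the rightmost child of the root, adding each selected attribute to the reduction and deleting every path that contains it. It reuses the assumed DTNode layout and is not the authors' verbatim algorithm.

    def remove_paths_with(node, attr):
        """Delete every branch below `node` whose path contains the attribute `attr`."""
        for name in list(node.children):
            if name == attr:
                del node.children[name]          # drop the subtree rooted at the attr node
            else:
                child = node.children[name]
                had_children = bool(child.children)
                remove_paths_with(child, attr)
                if had_children and not child.children:
                    del node.children[name]      # the whole branch disappeared together with attr

    def lower_approx_reduction(root, attr_order):
        """Extract one lower approximation reduction from the tree (illustrative reading)."""
        A = {name for name, child in root.children.items() if not child.children}  # single-node paths
        for a in list(A):                        # the core attributes join the reduction first
            remove_paths_with(root, a)
        while root.children:                     # then take the rightmost child of the root each round
            a = max(root.children, key=attr_order.index)
            A.add(a)
            remove_paths_with(root, a)
        return A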
At present, there are many methods for attribute reduction of decision tables, such as reduction algorithms based on attribute importance measures. Based on the discernibility matrix of the decision table, the most important attribute is selected and added to the core attribute set according to its importance measure, and the relative attribute reduction of the decision table is then obtained. However, on large data sets, computing the importance measures of all attributes greatly increases the time complexity of the algorithm. Similarly, relative reduction algorithms based on conditional entropy also have some shortcomings: when computing conditional entropy or information gain, many floating-point calculations are needed, which greatly increases the time complexity and reduces the efficiency of the algorithm. Currently, the most commonly used approach is the reduction algorithm based on the discernibility matrix. Its basic method is to obtain the identifiable function and simplify it into a disjunctive normal form, which guarantees the completeness of the reduction. However, in the process of simplification, if there are many repeated objects and too many comparisons between objects, the time and space complexity increase. Moreover, heuristic algorithms based on the discernibility matrix mainly use the discernibility matrix to find the core and then, according to a heuristic rule, add attributes until the condition is satisfied; although the time performance of such algorithms is greatly improved, complete reduction cannot be guaranteed. In this paper, by establishing a virtual tree structure, duplicate identifiable attribute sets are mapped to the same path to reduce the space occupied by repeated attribute sets. At the same time, redundant parent-set elements are deleted through prefix sharing, and the discernibility matrix is compressed and stored. This method not only saves space and reduces the time complexity, but also preserves the completeness of the algorithm.
Moreover, we will prove the completeness of Algorithm 2. In an inconsistent ordered decision information system $S = (U, AT \cup DT, F, G)$, the following two conditions are required to prove the completeness of Algorithm 2 ($A \subseteq AT$):
  • $\underline{\eta}_A = \underline{\eta}_{AT}$;
  • $\forall a \in A$, $\underline{\eta}_{A - \{a\}} \neq \underline{\eta}_{AT}$.
According to the equivalent condition of the lower approximation reduction, the above conditions can be transformed into the following forms by Theorem 2.
  • $\forall (x_i, x_j) \in D^{\underline{\eta}}_{AT}$, $A \cap D^{\underline{\eta}}_{AT}(x_i, x_j) \neq \emptyset$;
  • $\forall a \in A$, $\exists (x_i, x_j) \in D^{\underline{\eta}}_{AT}$ such that $(A - \{a\}) \cap D^{\underline{\eta}}_{AT}(x_i, x_j) = \emptyset$.
The above two expressions establish the completeness of the reduction in terms of the lower approximation identifiable sets. Because the discernibility information tree based on the lower approximation identifiable matrix contains all the condition attributes needed for lower approximation reduction, the above two formulas can be transformed into the following:
  • $\forall M \in DS$, $M \cap A \neq \emptyset$;
  • $\forall a \in A$, $\exists M \in DS$ such that $M \cap (A - \{a\}) = \emptyset$,
where $DS$ represents the set of identifiable attribute sets corresponding to all paths in the discernibility information tree based on the lower approximation identifiable matrix.
Therefore, proving the completeness of Algorithm 2 is equivalent to proving that the two formulas above hold. According to Algorithm 2, $M \cap A \neq \emptyset$ for any $M \in DS$. Based on Theorem 5, in the discernibility information tree, the union of the identifiable attribute sets corresponding to paths with only one node constitutes the core $Core_{DT}(AT)$ of the condition attribute set of the decision information system. The core $Core_{DT}(AT)$ obtained in the third step of Algorithm 2 is taken as part of the lower approximation reduction, and all paths containing elements of the core are deleted from the discernibility information tree. Suppose $S' = A - Core_{DT}(AT)$. If $b$ is the rightmost element of $S'$, then the rightmost child node of the root node in the current discernibility information tree must be $b$, and the subtree that takes this node $b$ as its root does not contain any node corresponding to an attribute in $S' - \{b\}$. Therefore, for $b \in A$, there exists $M \in DS$ such that $M \cap (A - \{b\}) = \emptyset$. Similarly, the other elements of $S'$ also satisfy this condition. Thus, the lower approximation reduction obtained with the discernibility information tree is a complete reduction.

4. An Illustrative Example

In the following, we construct the discernibility information tree under the dominance relation according to the steps of Algorithm 1 and implement the lower approximation reduction based on this tree in an inconsistent ordered decision information system. An inconsistent ordered decision table is given in Table 1, in which the object set is $U = \{x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}\}$, the condition attribute set is $AT = \{a, b, c, d, e\}$, and the decision attribute set is $D = \{decision\}$.
First, after a simple calculation, we can get the dominance classes based on the dominance relation $R_{AT}$:
$[x_1]_{AT} = \{x_1, x_5, x_6, x_7, x_8\}$, $[x_2]_{AT} = \{x_2, x_8, x_9\}$,
$[x_3]_{AT} = \{x_1, x_2, x_3, x_5, x_6, x_7, x_8, x_9\}$, $[x_4]_{AT} = \{x_2, x_4, x_8, x_9\}$,
$[x_5]_{AT} = \{x_5, x_6\}$, $[x_6]_{AT} = \{x_5, x_6\}$,
$[x_7]_{AT} = \{x_5, x_6, x_7, x_8, x_9\}$, $[x_8]_{AT} = \{x_8, x_9\}$,
$[x_9]_{AT} = \{x_9\}$, $[x_{10}]_{AT} = \{x_{10}\}$.
Based on decision attribute set D, decision classes are
  • $D_1 = [x_3]_D = [x_5]_D = [x_6]_D = [x_9]_D = \{x_3, x_5, x_6, x_9\}$,
  • $D_2 = [x_2]_D = [x_8]_D = \{x_2, x_3, x_5, x_6, x_8, x_9\}$,
  • $D_3 = [x_1]_D = [x_7]_D = \{x_1, x_2, x_3, x_5, x_6, x_7, x_8, x_9\}$,
  • $D_4 = [x_4]_D = [x_{10}]_D = \{x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}\}$.
According to the above dominance classes, we can obtain $R_{AT} \nsubseteq R_D$. Thus, this is an inconsistent ordered decision information system. Next, we compute the lower approximations with respect to $D_i$ ($i = 1, 2, \ldots, r$):
  • $\underline{R_{AT}}(D_1) = \{x_5, x_6, x_9\}$,
  • $\underline{R_{AT}}(D_2) = \{x_2, x_5, x_6, x_8, x_9\}$,
  • $\underline{R_{AT}}(D_3) = \{x_1, x_2, x_3, x_5, x_6, x_7, x_8, x_9\}$,
  • $\underline{R_{AT}}(D_4) = \{x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}\}$.
Then, based on the lower approximation identifiable attribute sets $D^{\underline{\eta}}_{AT}(x_i, x_j)$, the lower approximation identifiable matrix is shown in Table 2.
Furthermore, the root node of the discernibility information tree based on the lower approximation identifiable matrix is created. In each subset of the condition attribute set appearing in the lower approximation identifiable matrix, the attributes are ordered from left to right according to their order in the ordered decision information system.
Next, a lower approximation identifiable attribute set of the lower approximation identifiable matrix is selected, and the corresponding path is created. Here we select the first subset $\{a, b, c, d, e\}$, take it as the first path $\langle a, b, c, d, e \rangle$, and insert it into the discernibility information tree. Since the second identifiable attribute set is also $\{a, b, c, d, e\}$, it is mapped to the same first path. In the same way, every identical identifiable attribute set is mapped to the same path. For the identifiable attribute set $\{a\}$ in the lower approximation identifiable matrix, its corresponding path $\langle a \rangle$ is established. Since it is a prefix of the first path, the strategy of deleting the subtree is adopted: all the nodes $b, c, d, e$ after $a$ are deleted, but the node $a$ is kept. That is, the original path $\langle a, b, c, d, e \rangle$ is modified to the path $\langle a \rangle$. Next, the condition attribute set $\{b, c, d, e\}$ is mapped to the new path $\langle b, c, d, e \rangle$, creating a new node $b$. As for the lower approximation identifiable attribute set $\{b, e\}$, the corresponding path $\langle b, e \rangle$ is constructed, which shares the prefix $b$ with the previous path $\langle b, c, d, e \rangle$. Afterwards, we insert the path $\langle c, d \rangle$ corresponding to the condition attribute set $\{c, d\}$ into the discernibility information tree. Similarly, we repeat the above procedure until the last path has been inserted into the discernibility information tree. Finally, Figure 3 shows the lower approximation discernibility information tree obtained from Algorithm 1 and Table 2.
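Under the assumptions of the earlier sketches, this construction can be replayed by feeding the identifiable attribute sets listed above into build_tree; the stored paths should match Figure 3. The insertion order below simply follows the walkthrough and is not recomputed from Table 2.

    attr_order = ["a", "b", "c", "d", "e"]
    sets_in_insertion_order = [
        {"a", "b", "c", "d", "e"}, {"a", "b", "c", "d", "e"},   # identical sets share one path
        {"a"},                                                  # prefix of <a,b,c,d,e>: subtree deleted
        {"b", "c", "d", "e"}, {"b", "e"},                       # two paths sharing the prefix b
        {"c", "d"},
    ]
    tree = build_tree(attr_order, sets_in_insertion_order)
    # Expected stored paths (cf. Figure 3): <a>, <b,c,d,e>, <b,e>, <c,d>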
Using Algorithm 2, we will give a lower approximation reduction based on the discernibility information tree. Before that, we first give all the reduction results obtained through the original method. According to Definition 5 and Theorem 2, the minimal disjunctive normal form is obtained by conjunction and disjunction operations, and then all the lower approximation reductions are obtained.
$(a \vee b \vee c \vee d \vee e) \wedge (b \vee c \vee d \vee e) \wedge (c \vee d) \wedge (b \vee e) \wedge a = (c \vee d) \wedge (b \vee e) \wedge a = (a \wedge b \wedge c) \vee (a \wedge b \wedge d) \vee (a \wedge c \wedge e) \vee (a \wedge d \wedge e).$
Therefore, the lower approximation reductions are $\{a, b, c\}$, $\{a, b, d\}$, $\{a, c, e\}$, $\{a, d, e\}$. Next, combined with Algorithm 2, a lower approximation reduction method based on the discernibility information tree is presented. The specific steps are as follows.
(1) According to the first step of Algorithm 2, we first establish an empty set $A$.
(2) Based on the lower approximation identifiable matrix, the path with only one node, $\{a\}$, is selected in the discernibility information tree. Then, all paths containing the node $a$ are deleted; that means removing the path $\langle a \rangle$.
(3) $A = A \cup \{a\}$.
(4) Choose the rightmost child node $c$ of the root node in the discernibility information tree and let $A = A \cup \{c\}$.
(5) Delete the paths $\langle b, c, d, e \rangle$ and $\langle c, d \rangle$ that contain the node $c$.
(6) At this point, there is only one path $\langle b, e \rangle$ left in the discernibility information tree. Select the node $b$ and delete the path $\langle b, e \rangle$. Finally, $A = A \cup \{b\}$.
(7) At this time, the root node of the discernibility information tree has no child nodes, so the algorithm terminates. A lower approximation reduction based on the discernibility information tree is $A = \{a, b, c\}$.
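These steps, and the reductions obtained earlier from the minimal disjunctive normal form, can be cross-checked with the sketches above (again under their stated assumptions):

    # Replaying Algorithm 2 on the tree built in the previous snippet.
    print(lower_approx_reduction(tree, attr_order))     # expected: {'a', 'b', 'c'}

    # Enumerating all reductions from the distinct identifiable attribute sets of Table 2.
    clauses = [{"a", "b", "c", "d", "e"}, {"b", "c", "d", "e"}, {"c", "d"}, {"b", "e"}, {"a"}]
    print(all_reductions(clauses))
    # expected: {a,b,c}, {a,b,d}, {a,c,e}, {a,d,e} (as frozensets)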
According to the above method, the lower approximation reduction is obtained. By comparing it to the original results, the correctness of the method is verified, and its effectiveness is demonstrated. The method of the discernibility information tree reduces the space occupation of redundant attributes and reduces the computational load.

5. Conclusions

In this paper, a data structure of ordered decision information system based on discernibility information tree is proposed, which can compress and store lower approximation identifiable attribute sets. Compared with the minimal disjunctive normal form of the lower approximation identifiable formulas, the spatial complexity of the lower approximation reduction of inconsistent ordered information systems is greatly reduced by using this discernibility information tree. However, based on the construction of the discernibility information tree, the order of attributes inserted in the discernibility information tree is the original order of attributes in the decision table, without considering the influence of attribute importance measure on the construction of the discernibility information tree. Therefore, our next step is to improve the construction of the discernibility information tree by combining the importance measure of attributes and the core.

Author Contributions

Conceptualization, Methodology and Formal analysis, J.Z. and X.Z.; Validation, J.Z. and W.X.; Investigation, Supervision, Project administration and Funding acquisition, X.Z. and W.X.

Funding

This work is supported by the National Natural Science Foundation of China (No. 61472463, No. 61402064, No. 61772002), and the Science and Technology Research Program of Chongqing Municipal Education Commission (KJ1709221).

Acknowledgments

The authors thank the anonymous reviewers for their constructive suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
  2. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1992.
  3. Li, W.T.; Pedrycz, W.; Xue, X.P.; Xu, W.H.; Fan, B.J. Distance-based double-quantitative rough fuzzy sets with logic operations. Int. J. Approx. Reason. 2018, 101, 206–233.
  4. Li, W.T.; Pedrycz, W.; Xue, X.P.; Xu, W.H.; Fan, B.J. Fuzziness and incremental information of disjoint regions in double-quantitative decision-theoretic rough set model. Int. J. Mach. Learn. Cybern. 2018.
  5. Xu, W.H.; Yu, J.H. A novel approach to information fusion in multi-source datasets: A granular computing viewpoint. Inf. Sci. 2017, 378, 410–423.
  6. Xu, W.H.; Pang, J.Z.; Luo, S.Q. A novel cognitive system model and approach to transformation of information granules. Int. J. Approx. Reason. 2014, 55, 853–866.
  7. Xu, W.H.; Li, W.T. Granular Computing Approach to Two-Way Learning Based on Formal Concept Analysis in Fuzzy Datasets. IEEE Trans. Cybern. 2016, 46, 366–379.
  8. Medina, J.; Ojeda-Aciego, M. Multi-adjoint t-concept lattices. Inf. Sci. 2010, 180, 712–725.
  9. Kumar, A.; Kumar, D.; Jarial, S.K. A hybrid clustering method based on improved artificial bee colony and fuzzy C-Means algorithm. Int. J. Artif. Intell. 2017, 15, 40–60.
  10. Pozna, C.; Minculete, N.; Precup, R.E.; Kóczy, L.T.; Ballagi, A. Signatures: Definitions, operators and applications to fuzzy modelling. Fuzzy Sets Syst. 2012, 201, 86–104.
  11. Jankowski, J.; Kazienko, P.; Wątróbski, J.; Lewandowska, A.; Ziemba, P.; Zioło, M. Fuzzy multi-objective modeling of effectiveness and user experience in online advertising. Exp. Syst. Appl. 2016, 65, 315–331.
  12. Jeon, G.; Kim, D.; Jeong, J. Rough sets attributes reduction based expert system in interlaced video sequences. IEEE Trans. Consum. Electron. 2006, 52, 1348–1355.
  13. Düntsch, I.; Gediga, G. Uncertainty measures of rough set prediction. Artif. Intell. 1998, 106, 109–113.
  14. Hu, X.; Cercone, N. Learning in relational databases: A rough set approach. Int. J. Comput. Intell. 2010, 11, 323–338.
  15. Pedrycz, W.; Bargiela, A. Granular clustering: A granular signature of data. IEEE Trans. Syst. Man Cybern. Part B 2002, 32, 212–224.
  16. Qian, Y.H.; Li, S.Y.; Liang, J.Y.; Shi, Z.; Wang, F. Pessimistic rough set based decisions: A multigranulation fusion strategy. Inf. Sci. 2014, 264, 196–210.
  17. Pedrycz, W. Granular Computing: Analysis and Design of Intelligent Systems; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2013.
  18. Yao, Y.; Zhao, Y. Attribute reduction in decision-theoretic rough set models. Inf. Sci. 2008, 178, 3356–3373.
  19. Xu, W.; Li, W.; Luo, S. Knowledge reductions in generalized approximation space over two universes based on evidence theory. J. Intell. Fuzzy Syst. 2015, 28, 2471–2480.
  20. Guo, Y.T.; Xu, W.H. Attribute Reduction in Multi-source Decision Systems. In Proceedings of the International Joint Conference on Rough Sets, Santiago de Chile, Chile, 7–11 October 2016; Springer: Cham, Switzerland, 2016; pp. 558–568.
  21. Yang, X.; Yang, J.; Wu, C.; Yu, D. Dominance-based rough set approach and knowledge reductions in incomplete ordered information system. Inf. Sci. 2008, 178, 1219–1234.
  22. Xu, W.; Shao, M.; Zhang, W. Knowledge Reduction Based on Evidence Reasoning Theory in Ordered Information Systems. Knowl. Eng. Manag. 2006, 4092, 535–547.
  23. Skowron, A.; Rauszer, C. The Discernibility Matrices and Functions in Information Systems. Intell. Decis. Support 1992, 11, 331–362.
  24. Liang, J.Y.; Chin, K.S.; Dang, C.Y.; Yam, R.C.M. A new method for measuring uncertainty and fuzziness in rough set theory. Int. J. Gen. Syst. 2002, 31, 331–342.
  25. Cao, F.Y.; Liang, J.Y.; Qian, Y.H. Decision table reduction based on information entropy. Comput. Appl. 2005, 25, 2630–2631.
  26. Hu, X.G.; Xue, F.; Zhang, Y.H.; Zhang, J. Attribute reduction method of decision table based on concept lattice. Pattern Recognit. Artif. Intell. 2009, 22, 624–629.
  27. Jiang, Y.; Wang, X.; Zhen, Y. Attribute reduction algorithm of rough sets based on discernibility matrix. J. Syst. Simul. 2008, 20, 3717–3720.
  28. Yang, M.; Yang, P. A novel condensing tree structure for rough set feature selection. Neurocomputing 2008, 71, 1092–1100.
  29. Dai, J.H.; Hu, H.; Wu, W.Z.; Qian, Y.; Huang, D. Maximal-discernibility-pair-based approach to attribute reduction in fuzzy rough sets. IEEE Trans. Fuzzy Syst. 2018, 26, 2174–2187.
  30. Qian, Y.; Liang, J.; Pedrycz, W.; Dang, C. Positive approximation: An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618.
  31. Hu, B.Q.; Zhang, L.J.; Zhou, Y.C.; Pedrycz, W. Large-scale multimodality attribute reduction with multi-kernel fuzzy rough sets. IEEE Trans. Fuzzy Syst. 2018, 26, 226–238.
  32. Zhang, W.D.; Wang, X.; Yang, X.B.; Chen, X.; Wang, P. Neighborhood attribute reduction for imbalanced data. In Granular Computing; Springer: New York, NY, USA, 2018; pp. 1–11.
  33. Li, F.; Jin, C.; Yang, J. Roughness measure based on description ability for attribute reduction in information system. Int. J. Mach. Learn. Cybern. 2018.
  34. Xie, X.J.; Qin, X.L. A novel incremental attribute reduction approach for dynamic incomplete decision systems. Int. J. Approx. Reason. 2018, 93, 443–462.
  35. Shi, Y.P.; Huang, Y.; Wang, C.Z.; He, Q. Attribute reduction based on the Boolean matrix. In Granular Computing; Springer: New York, NY, USA, 2018; pp. 1–10.
  36. Ge, H.; Li, L.S.; Xu, Y.; Yang, C. Quick general reduction algorithms for inconsistent decision tables. Int. J. Approx. Reason. 2017, 82, 56–80.
  37. Wang, C.Z.; He, Q.; Shao, M.W.; Hu, Q. Feature selection based on maximal neighborhood discernibility. Int. J. Mach. Learn. Cybern. 2018, 9, 1929–1940.
  38. Jiang, Y. Attribute reduction with rough set based on discernibility information tree. Control Decis. 2015, 30, 1531–1536.
Figure 1. The flow chart of discernibility information tree in an IODIS.
Figure 2. The flow chart of lower approximation reduction based on the discernibility information tree in an IODIS.
Figure 3. The discernibility information tree in an IODIS.
Table 1. An inconsistent ordered decision table.

U       a   b   c   d   e   Decision
x_1     2   1   3   2   3   2
x_2     4   0   2   1   2   3
x_3     2   0   2   1   2   4
x_4     4   0   1   0   2   1
x_5     3   3   5   4   5   4
x_6     3   3   5   4   5   4
x_7     2   2   3   2   4   2
x_8     4   3   4   3   5   3
x_9     4   4   4   3   6   4
x_10    1   4   5   4   6   1
Table 2. A lower approximation identifiable matrix in an IODIS.
U x 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 x 9 x 10
x 1 { b , c , d , e } { a }
x 2 { a } { a } { c , d } { a } { a }
x 3 { c , d } { a }
x 4
x 5 { a , b , c , d , e } { b , c , d , e } { a , b , c , d , e } { b , c , d , e } { a , b , c , d , e } { c , d } { a }
x 6 { a , b , c , d , e } { b , c , d , e } { a , b , c , d , e } { b , c , d , e } { a , b , c , d , e } { c , d } { a }
x 7 { b , c , d , e } { a }
x 8 { a , b , c , d , e } { b , c , d , e } { b , c , d , e } { a , b , c , d , e } { a }
x 9 { a , b , c , d , e } { b , c , d , e } { a , b , c , d , e } { b , c , d , e } { a , b , c , d , e } { b , e } { a }
x 10
