Article

Generalized Grey Target Decision Method for Mixed Attributes Based on Kullback-Leibler Distance

School of Energy Science and Engineering, Henan Polytechnic University, Jiaozuo 454000, China
Entropy 2018, 20(7), 523; https://doi.org/10.3390/e20070523
Submission received: 14 May 2018 / Revised: 22 June 2018 / Accepted: 6 July 2018 / Published: 12 July 2018

Abstract

A novel generalized grey target decision method for mixed attributes based on the Kullback-Leibler (K-L) distance is proposed. The approach involves the following steps: first, all indices are converted into index binary connection number vectors; second, the two-tuple (determinacy, uncertainty) numbers derived from the index binary connection number vectors are obtained; third, the positive and negative target centers of the two-tuple (determinacy, uncertainty) numbers are calculated; then, the K-L distances of all alternatives to their positive and negative target centers are integrated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method; the final decision is made on the integrated values on a larger-the-better basis. A case study exemplifies the proposed approach.

1. Introduction

The grey target decision method has been studied by many scholars since it was proposed by Deng [1]. With further research on decision-making, the indices of alternatives have been extended from pure real values to mixed attribute values, and mixed-attribute-based grey target decision methods have been proposed to broaden the method's applicability. The core of the grey target decision method is to obtain the distances of the alternatives to their target center, which serve as the basis for decision-making. The grey target decision method based on certain (real) numbers calculates the target center distance by distance measures such as the Euclidean distance and the Mahalanobis distance [2,3]. Reported mixed attribute grey target decision methods deal with the target center distance in two ways: one uses distance measures, mainly the Euclidean distance and similar distances [4,5,6,7,8,9]; the other uses vector-based measures, as in the generalized grey target decision method [10,11]. The generalized grey target decision method differs from the conventional one in its calculation process, although it still obeys the principle of the conventional grey target decision method [10,11,12,13]. A tool for measuring the uncertainty of fuzzy numbers in the mixed-attribute-based grey target decision method is needed to make decision-making more valuable in terms of both theoretical significance and practical application. Entropy is often used to measure uncertainty, so it can naturally be applied to the generalized grey target decision method involving fuzzy numbers. In particular, the Kullback-Leibler distance (K-L distance), which originates from cross-entropy, can reflect the similarity of two discrete probability distributions [14]. Cross-entropy has been widely used in many fields: Vlachos and Sergiadis applied it to pattern recognition with intuitionistic fuzzy information [15]. Li and Wu studied the alternative preference problem based on intuitionistic fuzzy cross-entropy [16]. Xia and Xu carried out group decision-making under an intuitionistic fuzzy environment [17]. Śmieja and Geiger studied cross-entropy clustering under an information bottleneck constraint [18]. Tang et al. proposed a hybrid optimization algorithm based on the cross-entropy method [19].
The principle of the proposed approach is as follows: all indices of alternatives are first converted into binary connection number vectors, each divided into a deterministic term and an uncertain term following the previously reported method. Then the deterministic and uncertain terms of the positive and negative target centers under each attribute are obtained. Next, the two-tuple (determinacy, uncertainty) numbers derived from the index binary connection number vectors are deduced. Following that, the K-L distances of all alternatives to their positive and negative target centers are integrated using the TOPSIS method; the final decision is based on the integrated value, for which the larger the better.

2. Basic Theory

2.1. Fuzzy Number

Definition 1.
Let $R$ be the real domain; if $\tilde{x}$ denotes a fuzzy number, then $[x^L, x^U]$, $[x^L, x^M, x^U]$ and $[x^L, x^M, x^N, x^U]$ are the expressions of $\tilde{x}$ called the interval number, triangular fuzzy number and trapezoidal fuzzy number, respectively, where $x^L$, $x^M$, $x^N$ and $x^U$ satisfy $0 < x^L < x^M < x^N < x^U$, with $x^L, x^M, x^N, x^U \in R$ [20,21].

2.2. Binary Connection Number

Definition 2.
Let $R$ be the real domain; $A + Bi$ is called a binary connection number, where $A$ represents the deterministic term, $B$ the uncertain term, and $i$ a variable unifying the determinacy and uncertainty of a fuzzy number, with $A, B \in R$ and $i \in [-1, 1]$.
Definition 3.
Let $\bar{x}$ and $v$ be the mean value and the deviation value of the $n$ ($n \ge 2$) parameters of $\tilde{x}$, respectively; then
$$u(\bar{x}, v) = A + Bi = \bar{x} + vi \quad (i \in [-1, 1])$$ (1)
is called a mean value-deviation value connection number, where $\bar{x}$, $S$, $m_s$ and $v$ are calculated using Equations (2)–(5):
$$\bar{x} = \frac{1}{n}\sum_{j=1}^{n} x_j$$ (2)
$$S = \sqrt{\frac{1}{n-1}\sum_{j=1}^{n}(x_j - \bar{x})^2}$$ (3)
$$m_s = \max\{|x^L - \bar{x}|, |x^U - \bar{x}|\}$$ (4)
$$v = \min\{S, m_s\}$$ (5)
where $x_j$ ($j = 1, \ldots, n$) is the $j$th parameter of the fuzzy number $\tilde{x}$, $\bar{x}$ is the mean value of the parameters, $S$ denotes the standard deviation of the parameters, $m_s$ is the maximum deviation of the parameters, $v$ is the minimum of $S$ and $m_s$, and $x^L$ and $x^U$ are the fuzzy number's lower and upper limits, respectively [10,22].
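For illustration, here is a minimal Python sketch of Equations (2)–(5); the function name connection_number and the list-based parameter layout are assumptions made for this sketch, not notation from the paper.

```python
import math

def connection_number(params):
    """Return (A, B) for the mean value-deviation value connection number A + Bi.

    params: the parameters of the fuzzy number, e.g. [xL, xU] for an interval
    number, [xL, xM, xU] for a triangular fuzzy number, or [xL, xM, xN, xU]
    for a trapezoidal fuzzy number; a plain real number is passed as [x].
    """
    n = len(params)
    if n == 1:                                   # a real number maps to A + 0i
        return float(params[0]), 0.0
    x_bar = sum(params) / n                                          # Equation (2)
    s = math.sqrt(sum((x - x_bar) ** 2 for x in params) / (n - 1))   # Equation (3)
    m_s = max(abs(params[0] - x_bar), abs(params[-1] - x_bar))       # Equation (4)
    v = min(s, m_s)                                                  # Equation (5)
    return x_bar, v

# Examples taken from Table 1/Table 3: [55, 56] -> 55.5 + 0.5i, [0.4, 0.5, 0.6] -> 0.5 + 0.1i.
print(connection_number([55, 56]))         # (55.5, 0.5)
print(connection_number([0.4, 0.5, 0.6]))  # approximately (0.5, 0.1)
```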
Definition 4.
The mutual interaction of the mean value $\bar{x}$ and the deviation value $v$ (standard deviation or maximum deviation) of the binary connection number $u(\bar{x}, v)$ can be mapped to the determinacy-uncertainty space (D-U space). If $u(\bar{x}, v) = \bar{x} + vi$ represents a vector in D-U space, then $i$ only denotes the sign of the uncertain term and does not represent a changeable value [20,21].
Figure 1 shows a D-U space. The U-axis represents the relative uncertainty measure, while the D-axis denotes the relative determinacy measure. As seen from Figure 1, $\bar{x}$ and $S$ interact with each other; this interaction is reflected in the space as the vector $\overline{OE}$ from O to E, and the degree of interaction is represented by the modulus of $\overline{OE}$, denoted by $r$.

2.3. Kullback-Leibler Distance

Definition 5.
Kullback-Leibler distance [14,15]. Let $X = (x_1, x_2, \ldots, x_m)^{\mathrm{T}}$ and $Y = (y_1, y_2, \ldots, y_m)^{\mathrm{T}}$ be two vectors, where $x_j, y_j \ge 0$, $j = 1, 2, \ldots, m$, and $\sum_{j=1}^{m} x_j = \sum_{j=1}^{m} y_j = 1$; then the K-L distance of $X$ and $Y$ is given by Equation (6):
$$H(X, Y) = \sum_{j=1}^{m} x_j \ln\frac{x_j}{y_j}$$ (6)
$H(X, Y)$ exhibits the following characteristics:
(1) 
$H(X, Y) = \sum_{j=1}^{m} x_j \ln\frac{x_j}{y_j} \ge 0$;
(2) 
$H(X, Y) = \sum_{j=1}^{m} x_j \ln\frac{x_j}{y_j} = 0$ if and only if $x_j = y_j$ for all $j$.
If $x_j \ne 0$ and $y_j = 0$, then $H(X, Y) \to \infty$; so, the original K-L distance needs to be improved. The revised version of the K-L distance is as follows:
$$K(X, Y) = H\left(X, \frac{X + Y}{2}\right) = \sum_{j=1}^{m} x_j \ln\frac{x_j}{\frac{1}{2}(x_j + y_j)}$$ (7)
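The following is a hedged sketch of Equations (6) and (7); the function names kl_distance and revised_kl_distance are illustrative, and terms with $x_j = 0$ are taken to contribute zero by the usual convention.

```python
import math

def kl_distance(x, y):
    """H(X, Y) = sum_j x_j * ln(x_j / y_j); terms with x_j = 0 contribute 0."""
    return sum(xj * math.log(xj / yj) for xj, yj in zip(x, y) if xj > 0)

def revised_kl_distance(x, y):
    """K(X, Y) = H(X, (X + Y) / 2), Equation (7)."""
    mid = [(xj + yj) / 2 for xj, yj in zip(x, y)]
    return kl_distance(x, mid)

# Example with a zero component in Y: H(X, Y) would diverge, K(X, Y) stays finite.
x = [0.5, 0.3, 0.2]
y = [0.6, 0.4, 0.0]
print(revised_kl_distance(x, y))
```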
Definition 6.
Comprehensive weighted K-L distance. Let $S = ((x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m))^{\mathrm{T}}$ and $E = ((p_1, q_1), (p_2, q_2), \ldots, (p_m, q_m))^{\mathrm{T}}$ denote two vectors of two-tuple (determinacy, uncertainty) numbers, where $(x_j, y_j)$ and $(p_j, q_j)$, with $x_j, y_j, p_j, q_j \ge 0$, $j = 1, 2, \ldots, m$, are the two-tuple (determinacy, uncertainty) numbers under the same attribute in $S$ and $E$, respectively. Denote the weight vector $W$ by $W = (w_1, w_2, \ldots, w_m)^{\mathrm{T}}$, $w_j > 0$, $j = 1, 2, \ldots, m$, and assume that the two-tuples $S$ and $E$ satisfy the following normalization condition:
$$\sum_{j=1}^{m} w_j (x_j + y_j) \ge \sum_{j=1}^{m} w_j (p_j + q_j)$$ (8)
The comprehensive weighted K-L distance $H_W(S, E)$ can be calculated using the following equation:
$$H_W(S, E) = \sum_{j=1}^{m} w_j \left(x_j \ln\frac{x_j}{p_j} + y_j \ln\frac{y_j}{q_j}\right)$$ (9)
Then the function $H_W(S, E)$ has the following properties:
(1)
$H_W(S, E) \ge 0$;
(2)
$H_W(S, E) = 0$ if and only if $S = E$, or, what amounts to the same, $x_j = p_j$ and $y_j = q_j$, $j = 1, 2, \ldots, m$;
(3)
when $x_j = p_j = 0$ or $y_j = q_j = 0$, then, by definition, $x_j \ln\frac{x_j}{p_j} = 0$ or $y_j \ln\frac{y_j}{q_j} = 0$, respectively.
The assertions in (1) and (2) can be proved as follows. Assume that $p_j > 0$ and $q_j > 0$ for $j = 1, 2, \ldots, m$. In the following sequence of (in)equalities we apply the convexity of the function $u \mapsto u\ln u$, $u > 0$, or, what amounts to the same, the log-sum inequality, also called Gibbs' inequality:
$$H_W(S, E) = \sum_{j=1}^{m} w_j\left(x_j \ln\frac{x_j}{p_j} + y_j \ln\frac{y_j}{q_j}\right) = \left(\sum_{k=1}^{m} w_k p_k\right)\sum_{j=1}^{m}\frac{w_j p_j}{\sum_{k=1}^{m} w_k p_k}\,\frac{x_j}{p_j}\ln\frac{x_j}{p_j} + \left(\sum_{k=1}^{m} w_k q_k\right)\sum_{j=1}^{m}\frac{w_j q_j}{\sum_{k=1}^{m} w_k q_k}\,\frac{y_j}{q_j}\ln\frac{y_j}{q_j}$$
(the function $u \mapsto u\ln u$, $u > 0$, is convex)
$$\ge \left(\sum_{k=1}^{m} w_k p_k\right)\left(\sum_{j=1}^{m}\frac{w_j p_j}{\sum_{k=1}^{m} w_k p_k}\,\frac{x_j}{p_j}\right)\ln\frac{\sum_{j=1}^{m} w_j x_j}{\sum_{k=1}^{m} w_k p_k} + \left(\sum_{k=1}^{m} w_k q_k\right)\left(\sum_{j=1}^{m}\frac{w_j q_j}{\sum_{k=1}^{m} w_k q_k}\,\frac{y_j}{q_j}\right)\ln\frac{\sum_{j=1}^{m} w_j y_j}{\sum_{k=1}^{m} w_k q_k} = \left(\sum_{j=1}^{m} w_j x_j\right)\ln\frac{\sum_{j=1}^{m} w_j x_j}{\sum_{k=1}^{m} w_k p_k} + \left(\sum_{j=1}^{m} w_j y_j\right)\ln\frac{\sum_{j=1}^{m} w_j y_j}{\sum_{k=1}^{m} w_k q_k}.$$
Put $x = \sum_{j=1}^{m} w_j x_j$, $y = \sum_{j=1}^{m} w_j y_j$, $p = \sum_{k=1}^{m} w_k p_k$ and $q = \sum_{k=1}^{m} w_k q_k$.
Then the inequality in property (1) implies
$$H_W(S, E) \ge x\ln\frac{x}{p} + y\ln\frac{y}{q} = (p + q)\left\{\frac{p}{p+q}\,\frac{x}{p}\ln\frac{x}{p} + \frac{q}{p+q}\,\frac{y}{q}\ln\frac{y}{q}\right\}$$
(apply once more the convexity of the function $u \mapsto u\ln u$, $u > 0$)
$$\ge (p + q)\,\frac{x + y}{p + q}\ln\frac{x + y}{p + q} = (x + y)\ln\frac{x + y}{p + q}.$$
The following inequality is true for $u > 0$: $\ln u \le u - 1$. Hence, we infer:
$$(x + y)\ln\frac{x + y}{p + q} = -(x + y)\ln\frac{p + q}{x + y} \ge -(x + y)\left\{\frac{p + q}{x + y} - 1\right\} = x + y - (p + q) \ge 0.$$
The final inequality follows from the normalization condition on $S$ and $E$. This proves the inequality in property (1). If $H_W(S, E) = 0$, then all the previous inequalities are in fact equalities, which can only be true provided $x_j = p_j$ and $y_j = q_j$ for $j = 1, 2, \ldots, m$; this is a kind of converse to Jensen's inequality or, as presented here, to the log-sum (Gibbs') inequality. Observe that the proofs of properties (1) and (2) can also be adapted to the situation where some of the $p_j$'s or some of the $q_j$'s are zero; essentially the same proof works by summing over those $1 \le j \le m$ for which $p_j \ne 0$ or $q_j \ne 0$.
However, if the condition on $S$ and $E$ in Equation (8) is not satisfied, then $H_W(S, E) < 0$ may occur; an improved version is therefore given as follows:
$$K_W(S, E) = \sum_{j=1}^{m} w_j \left(x_j \left|\ln\frac{x_j}{p_j}\right| + y_j \left|\ln\frac{y_j}{q_j}\right|\right)$$ (10)
In Equation (10), $K_W(S, E)$ has the same characteristics as $H_W(S, E)$, but it remains applicable in the special case where the condition in Equation (8) is not satisfied.
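For illustration, the following Python sketch implements Equations (9) and (10), assuming $S$, $E$ and $W$ are passed as plain lists; the function name weighted_kl and the use_abs switch are illustrative, not from the paper.

```python
import math

def weighted_kl(s, e, w, use_abs=True):
    """Comprehensive weighted K-L distance between two-tuple vectors.

    s, e: lists of (x_j, y_j) and (p_j, q_j) two-tuples, one per attribute.
    w:    attribute weights.
    use_abs=True  -> K_W(S, E), Equation (10);
    use_abs=False -> H_W(S, E), Equation (9).
    """
    def term(a, b):
        # Terms with a zero component are skipped; in particular this covers
        # the x_j = p_j = 0 (resp. y_j = q_j = 0) case of property (3).
        if a == 0 or b == 0:
            return 0.0
        t = a * math.log(a / b)
        return abs(t) if use_abs else t

    return sum(wj * (term(xj, pj) + term(yj, qj))
               for wj, (xj, yj), (pj, qj) in zip(w, s, e))
```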

3. Generalized Grey Target Decision Method for Mixed Attributes Based on the K-L Distance

Let $C = \{C_1, C_2, \ldots, C_n\}$, $A = \{A_1, A_2, \ldots, A_m\}$ and $W = (w_1, w_2, \ldots, w_m)^{\mathrm{T}}$ be the alternative set, attribute set and weight vector of the index attributes, respectively; then the index of alternative $C_s$ under attribute $A_t$ is $v_{st}$ ($s = 1, 2, \ldots, n$; $t = 1, 2, \ldots, m$).

3.1. Transformation of Index Values into Binary Connection Numbers

Different types of index values can be converted into binary connection numbers $A + Bi$, regarded as vectors in D-U space, using Equations (1)–(5). It is noteworthy that the binary connection number converted from a real number is of the form $A + 0i$, which means that the deterministic term is the real number itself and the uncertain term is $0i$. The transformed index vector can be expressed as $U_{st} = A_{st} + B_{st}i$ ($s = 1, 2, \ldots, n$; $t = 1, 2, \ldots, m$).

3.2. Determination of the Target Centre Index Vectors

The binary connection numbers converted from all index values, $U_{st} = A_{st} + B_{st}i$ ($s = 1, 2, \ldots, n$; $t = 1, 2, \ldots, m$), can also be denoted by the two-tuple numbers $U_{st} = (A_{st}, B_{st})$. The benefit-type index set and the cost-type index set are denoted by $J^+$ and $J^-$, respectively. Then the positive and negative target center index vectors of two-tuple (determinacy, uncertainty) numbers, denoted by $C_t^+$ and $C_t^-$, can be obtained using Equations (11) and (12).
The positive target center index of two-tuple (determinacy, uncertainty) is as follows:
$$C_t^+ = \begin{cases} (\max_s\{A_{st}\},\ \min_s\{B_{st}\}), & U_{st} \in J^+ \\ (\min_s\{A_{st}\},\ \min_s\{B_{st}\}), & U_{st} \in J^- \end{cases} \quad s = 1, 2, \ldots, n;\ t = 1, 2, \ldots, m$$ (11)
The negative target center index of two-tuple (determinacy, uncertainty) is as follows:
$$C_t^- = \begin{cases} (\min_s\{A_{st}\},\ \max_s\{B_{st}\}), & U_{st} \in J^+ \\ (\max_s\{A_{st}\},\ \max_s\{B_{st}\}), & U_{st} \in J^- \end{cases} \quad s = 1, 2, \ldots, n;\ t = 1, 2, \ldots, m$$ (12)
Equation (11) indicates that, under attribute $A_t$, the positive target center two-tuple (determinacy, uncertainty) number takes the maximum deterministic term and the minimum uncertain term for benefit-type indices, and the minimum deterministic term and the minimum uncertain term for cost-type indices. Equation (12) indicates that, under attribute $A_t$, the negative target center two-tuple (determinacy, uncertainty) number takes the minimum deterministic term and the maximum uncertain term for benefit-type indices, and the maximum deterministic term and the maximum uncertain term for cost-type indices.
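As a minimal sketch of Equations (11) and (12), the function below picks the positive and negative target center two-tuples per attribute; the decision-matrix layout (alternatives × attributes of $(A_{st}, B_{st})$ tuples) and the benefit-flag vector are illustrative assumptions.

```python
def target_centers(matrix, benefit):
    """matrix[s][t] = (A_st, B_st); benefit[t] is True for a benefit-type attribute.

    Returns (positive_centers, negative_centers), each a list of (A, B) per attribute.
    """
    m = len(matrix[0])
    pos, neg = [], []
    for t in range(m):
        A = [row[t][0] for row in matrix]
        B = [row[t][1] for row in matrix]
        if benefit[t]:
            pos.append((max(A), min(B)))   # Equation (11), benefit type
            neg.append((min(A), max(B)))   # Equation (12), benefit type
        else:
            pos.append((min(A), min(B)))   # Equation (11), cost type
            neg.append((max(A), max(B)))   # Equation (12), cost type
    return pos, neg
```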

3.3. Normalization of All Alternative Indices

The index vectors of all alternatives $U_{st} = A_{st} + B_{st}i$ ($s = 1, 2, \ldots, n$; $t = 1, 2, \ldots, m$) and the target center index vectors $U_{ct} = A_{ct} + B_{ct}i$ ($c = n + 1$; $t = 1, 2, \ldots, m$) can be expressed as vectors of two-tuple (deterministic degree, uncertainty degree) numbers:
$$a_{st} = \frac{A_{st}}{A_{st} + B_{st}}, \quad b_{st} = \frac{B_{st}}{A_{st} + B_{st}}, \quad (s = 1, 2, \ldots, n + 1;\ t = 1, 2, \ldots, m)$$ (13)
In Equation (13), $a_{st}$ and $b_{st}$ denote the deterministic degree and the uncertainty degree, respectively, of the normalized binary connection number under the same attribute. The vector of two-tuple (deterministic degree, uncertainty degree) numbers can then be given as $((a_{s1}, b_{s1}), (a_{s2}, b_{s2}), \ldots, (a_{sm}, b_{sm}))^{\mathrm{T}}$. It should be noted that a real-number index cannot be normalized in this step, or an error would occur when handling the uncertain terms of real numbers, as they are all zero under the same attribute.
The $a_{st}$ and $b_{st}$ in a two-tuple (deterministic degree, uncertainty degree) number $(a_{st}, b_{st})$ should be normalized further, because they are not comparable across different attributes. The normalization equation is as follows:
$$a'_{st} = \frac{a_{st}}{\sum_{s=1}^{n} a_{st}}, \quad b'_{st} = \frac{b_{st}}{\sum_{s=1}^{n} b_{st}}, \quad s = 1, \ldots, n;\ t = 1, \ldots, m$$ (14)
In Equation (14), $a'_{st}$ and $b'_{st}$ are the normalized deterministic term and uncertainty term, respectively, of the two-tuple number.
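A hedged sketch of the two normalization steps follows; the function name normalize_tuples is illustrative, real-valued indices (zero uncertain term) bypass Equation (13) as noted above, and the assumption that the target centers are scaled by the same column sums as the alternatives is consistent with the values in Table 5.

```python
def normalize_tuples(alternatives, centers):
    """alternatives[s][t] and centers[c][t] are (A, B) two-tuples."""
    def to_degree(a, b):
        # Equation (13); a real-valued index (b == 0) is left as it is here.
        return (a / (a + b), b / (a + b)) if b != 0 else (float(a), 0.0)

    alt1 = [[to_degree(a, b) for (a, b) in row] for row in alternatives]
    cen1 = [[to_degree(a, b) for (a, b) in row] for row in centers]
    m = len(alt1[0])
    # Equation (14): column sums taken over the n alternatives
    sums_a = [sum(row[t][0] for row in alt1) for t in range(m)]
    sums_b = [sum(row[t][1] for row in alt1) for t in range(m)]

    def scale(rows):
        return [[(row[t][0] / sums_a[t] if sums_a[t] else 0.0,
                  row[t][1] / sums_b[t] if sums_b[t] else 0.0)
                 for t in range(m)] for row in rows]

    return scale(alt1), scale(cen1)
```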

3.4. Integration by TOPSIS Method

The closeness of the comprehensive weighted K-L distances is used to judge the alternatives, taking full account of each alternative's relation to both its positive and its negative target center. The TOPSIS method has been used extensively since it was proposed [23]. Let $r_i^P$ and $r_i^N$ represent, respectively, the positive comprehensive weighted K-L distance and the negative comprehensive weighted K-L distance; then the closeness of the comprehensive weighted K-L distances can be obtained using Equation (15):
$$C_i = \frac{r_i^N}{r_i^P + r_i^N}, \quad i = 1, \ldots, n$$ (15)
The decision is based on $C_i$, for which the larger the better.
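Equation (15) reduces to a one-line computation; the example below reuses the comprehensive weighted K-L distances reported later in Section 4.2.5, and the variable names are illustrative.

```python
def closeness(r_pos, r_neg):
    """Equation (15): closeness of the comprehensive weighted K-L distances."""
    return [rn / (rp + rn) for rp, rn in zip(r_pos, r_neg)]

# Example with the case-study values from Section 4.2.5:
kl_pos = [0.0265, 0.1910, 0.0804, 0.1402]
kl_neg = [0.0607, 0.0152, 0.0615, 0.0394]
print(closeness(kl_pos, kl_neg))  # approximately [0.696, 0.074, 0.433, 0.219]
```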

3.5. Decision-Making Steps

The procedure of the generalized grey target decision method based on the K-L distance is shown in Figure 2; the detailed steps are as follows (a compact end-to-end sketch is given after the list):
(1)
All indices of the alternatives are converted into binary connection number vectors, from which the two-tuple (determinacy, uncertainty) numbers are composed, using Equations (1)–(5).
(2)
The positive and negative target center indices of two-tuple (determinacy, uncertainty) numbers under all attributes are determined using Equations (11) and (12).
(3)
All two-tuple (determinacy, uncertainty) numbers are transformed into two-tuple (deterministic degree, uncertainty degree) numbers using Equation (13) and then normalized using the linear method given in Equation (14).
(4)
The weights of all index attributes are calculated.
(5)
The comprehensive weighted K-L distances between the normalized two-tuple (deterministic degree, uncertainty degree) numbers of all alternatives and those of the target centers are calculated using Equation (9) or Equation (10); then the closeness of each alternative is obtained with the TOPSIS method using Equation (15).
(6)
The decision is made according to the closeness of each alternative, for which the larger the better.
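As a compact end-to-end sketch of steps (1)–(6), the driver below chains the helper functions sketched in Sections 2.3 and 3.1–3.4 (connection_number, target_centers, normalize_tuples, weighted_kl and closeness, all illustrative names); it assumes those sketches are in scope and that the attribute weights of step (4) are given.

```python
def rank_alternatives(raw, benefit, weights):
    """raw[s][t]: parameter list of the index of alternative s under attribute t."""
    # Step (1): binary connection numbers as (A, B) two-tuples
    tuples = [[connection_number(cell) for cell in row] for row in raw]
    # Step (2): positive and negative target centers
    pos, neg = target_centers(tuples, benefit)
    # Step (3): normalization by Equations (13) and (14)
    norm_alt, (norm_pos, norm_neg) = normalize_tuples(tuples, [pos, neg])
    # Step (5): comprehensive weighted K-L distances (Equation (10)) and closeness
    kl_pos = [weighted_kl(row, norm_pos, weights) for row in norm_alt]
    kl_neg = [weighted_kl(row, norm_neg, weights) for row in norm_alt]
    return closeness(kl_pos, kl_neg)  # step (6): larger is better
```

Applied to the Table 1 data with the weight vector of Section 4.2.5, this sketch is expected to reproduce the closeness values TS reported there up to rounding.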

4. Case Study

4.1. Data Resource

To evaluate tactical missiles, six indices, namely hit accuracy (km), warhead payload (kg), mobility (km·h⁻¹), price (10⁶ g), reliability and maintainability, are denoted by A1 to A6 [8]. Regarding data types, A1 and A2 are real numbers, A3 and A4 are interval numbers, and A5 and A6 are triangular fuzzy numbers. Among these attributes, A1 and A4 are cost-type indices and the others are benefit-type indices. There are four feasible alternatives, denoted by S1 to S4. The data are summarized in Table 1.

4.2. Decision-Making Process

4.2.1. Calculation of the Parameters of Binary Connection Number of All Alternatives

The parameters of binary connection number of all alternatives can be calculated from the data in Table 1 by using Equations (1)–(5): the results are shown in Table 2.

4.2.2. Transformation of All Index Values into Binary Connection Number Vectors

All index values can be transformed into index vectors using Equations (1)–(5) based on the data listed in Table 2. Table 3 lists the binary connection numbers as converted from all indices.
Then the two-tuple (determinacy, uncertainty) numbers shown in Table 4 are obtained from the index binary connection number vectors shown in Table 3.

4.2.3. Determination of the Two-Tuple Numbers of Positive and Negative Target Centers

The vectors of two-tuple numbers of the positive target center are calculated as ((1.8, 0), (540, 0), (55.5, 0.5), (4.7, 0.5), (0.7, 0.1), (0.9, 0.1)) by using Equation (11).
The vectors of two-tuple numbers of the negative target center are obtained as ((2.5, 0), (480, 0), (35, 5), (5.5, 0.5), (0.3, 0.1), (0.5, 0.1)) by using Equation (12).

4.2.4. Normalization of the Two-Tuple (Deterministic Degree, Uncertainty Degree) Numbers

The two-tuple (deterministic degree, uncertainty degree) numbers of the alternative indices and the target center indices can be normalized using Equation (14); the results are summarized in Table 5.
In Table 5, $a'_{st}$ and $b'_{st}$ in $(a'_{st}, b'_{st})$ represent, respectively, the deterministic term and the uncertain term of the same index. If an index is a real number, then $b'_{st}$ is zero; for example, the indices under attributes A1 and A2 are all real numbers. The symbols NCP and NCN denote the normalized two-tuple (deterministic degree, uncertainty degree) numbers of the positive target center and the negative target center, respectively.

4.2.5. Determination of the Closeness of Comprehensive Weighted K-L Distances of Positive and Negative Centers

Given the weight vector W = (0.1818, 0.2017, 0.1004, 0.2124, 0.1618, 0.1419), the comprehensive weighted K-L distances of all alternatives to their positive target center indices, calculated from Equation (10), are KL+ = (0.0265, 0.1910, 0.0804, 0.1402). The comprehensive weighted K-L distances of all alternatives to their negative target center indices are KL− = (0.0607, 0.0152, 0.0615, 0.0394), also from Equation (10). Equation (10) is used to calculate KL+ and KL− because the condition in Equation (8) is not satisfied by the given data. Then the closeness of the positive and negative K-L distances is calculated as TS = (0.6961, 0.0737, 0.4334, 0.2194) by the TOPSIS method using Equation (15). The final decision is made according to the closeness, with the larger value indicating the better alternative: $S_1 \succ S_3 \succ S_4 \succ S_2$.

4.3. Analysis and Discussion

For comparison, given W = (0.1818, 0.2017, 0.1004, 0.2124, 0.1618, 0.1419), the comprehensive weighted proximity (CWP) obtained using the approach in [10] is ICWP = (0.2023, 0.2928, 0.2354, 0.2695). According to the rule that the smaller the proximity, the better the alternative, the ranking of the alternatives is $S_1 \succ S_3 \succ S_4 \succ S_2$. Table 6 summarizes the comparison of the proximity-based method and the proposed method.
Table 6 lists the results calculated by two kinds of approaches: the K-L distance-based method and the vector-based method. The K-L distance-based method offers two ways to fulfil the decision-making task. One is to use the comprehensive weighted K-L distance to the positive target center alone; the decision is then made on a smaller-the-better basis. The other depends on the closeness of the two comprehensive weighted K-L distances, so that the distances of all alternatives to both their positive and negative target centers are covered; the ranking of the alternatives is then on a larger-the-better basis. The vector-based method, which makes its decision mainly by the comprehensive weighted proximity, depends on a value for which the smaller the better. The comparison shows that the decision made by the proposed method is in accordance with that made by the vector-based method; however, the proposed method differs from the vector-based method in the principle governing its decision-making. The similarity of, and difference between, the two methods are analyzed next.
The two methods have a similarity: the proposed method and the method reported in [10] both transform different types of data into binary connection numbers, which can then be handled in a uniform way. In brief, the binary connection number is the main tool used for dealing with mixed attribute values. The difference between the two methods is that the proposed method adopts the comprehensive weighted K-L distance to determine the ranking of the alternatives, making the decision from the perspective of entropy as a measure of uncertainty, while the method in [10] uses the comprehensive weighted proximity to determine this ranking, working from the viewpoint of the similarity of vectors.

5. Conclusions

With this research, we arrive at the following conclusions:
(a)
A novel generalized grey target decision method is presented, which uses the binary connection number and the K-L distance as its bases.
(b)
The decision making is based on the comprehensive weighted K-L distance.
(c)
The calculation result is in agreement with that of the reported method; however, the proposed method makes its decision based on the K-L distance, which can measure the uncertainty involved. Thus, the proposed method is valuable with regard to both its theoretical significance and its practical application.

Funding

This research was funded by the Doctoral Fund of Henan Polytechnic University Grant No. B2016-53.

Acknowledgments

The author is grateful to the editors and the anonymous reviewers for their comments and suggestions for improving the quality of this paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Deng, J.L. Grey System Theory; Huazhong University of Science and Technology Press: Wuhan, China, 2002. [Google Scholar]
  2. Dang, Y.G.; Liu, G.F.; Wang, J.P. Multi-attribute decision model of grey target considering weights. Stat. Decis. 2004, 3, 29–30. [Google Scholar]
  3. Wang, Z.X.; Dang, Y.G.; Yang, H. Improvements on decision method of grey target. Syst. Eng. Electron. 2009, 31, 2634–2636. [Google Scholar]
  4. Luo, D.; Wang, X. The multi-attribute grey target decision method for attribute value within three-parameter interval grey number. Appl. Math. Model. 2012, 36, 1957–1963. [Google Scholar] [CrossRef]
  5. Song, J.; Dang, Y.; Wang, Z.; Li, X. The decision-making model of harden grey target based on interval number with preference information on alternatives. J. Grey Syst. 2009, 21, 291–300. [Google Scholar]
  6. Guan, X.; Sun, G.D.; Yi, X.; Guo, Q. Hybrid multiple attribute recognition based on coefficient of incidence bull’s-eye-distance. Acta Aeronautica et Astronautica Sinica 2015, 36, 2431–2443. [Google Scholar]
  7. Zeng, B.; Liu, S.F.; Li, C.; Chen, J.M. Grey target decision-making model of interval grey number based on cobweb area. Syst. Eng. Electron. 2013, 35, 2329–2334. [Google Scholar]
  8. Shen, C.G.; Dang, Y.G.; Pei, L.L. Hybrid multi-attribute decision model of grey target. Stat. Decis. 2010, 12, 17–20. [Google Scholar]
  9. Dang, Y.G.; Liu, S.F.; Liu, B. Study on the multi-attribute decision model of grey target based on interval number. Eng. Sci. 2005, 7, 31–35. [Google Scholar]
  10. Ma, J.; Ji, C. Generalized grey target decision method for mixed attributes based on connection number. J. Appl. Math. 2014, 2014, 763543. [Google Scholar] [CrossRef]
  11. Ma, J. Generalized grey target decision method for mixed attributes with index weights containing uncertain numbers. J. Intell. Fuzzy Syst. 2018, 34, 625–632. [Google Scholar] [CrossRef]
  12. Ma, J. Grey target decision method for a variable target center based on the decision maker’s preferences. J. Appl. Math. 2014, 2014, 572529. [Google Scholar] [CrossRef]
  13. Ma, J.; Sun, J. Grey target decision method for positive and negative target centers based on decision maker’s preferences. Sci. Technol. Manag. Res. 2014, 34, 185–190. [Google Scholar]
  14. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  15. Vlachos, I.K.; Sergiadis, G.D. Intuitionistic fuzzy information-Applications to pattern recognition. Pattern Recognit. Lett. 2007, 28, 197–206. [Google Scholar] [CrossRef]
  16. Li, M.; Wu, C. A distance model of intuitionistic fuzzy cross entropy to solve preference problem on alternatives. Math. Probl. Eng. 2016, 2016, 8324124. [Google Scholar] [CrossRef]
  17. Xia, M.; Xu, Z. Entropy/cross entropy-based group decision making under intuitionistic fuzzy environment. Inf. Fusion 2012, 13, 31–47. [Google Scholar] [CrossRef]
  18. Śmieja, M.; Geiger, B.C. Semi-supervised cross-entropy clustering with information bottleneck constraint. Inf. Sci. 2017, 421, 254–271. [Google Scholar] [CrossRef] [Green Version]
  19. Tang, R.; Fong, S.; Dey, N.; Wong, R.K.; Mohammed, S. Cross entropy method based hybridization of dynamic group optimization algorithm. Entropy 2017, 19, 533. [Google Scholar] [CrossRef]
  20. Zhao, K.Q. The theoretical basis and basic algorithm of binary connection number A + Bi and its application in AI. CAAI Trans. Intell. Syst. 2008, 3, 476–486. [Google Scholar]
  21. Zhao, K.Q. Decision making algorithm based on set pair analysis for use when facing multiple uncertain attributes. CAAI Trans. Intell. Syst. 2010, 5, 41–50. [Google Scholar]
  22. Ma, J.; Ji, C.; Sun, J. Fuzzy similar priority method for mixed attributes. J. Appl. Math. 2014, 2014, 304202. [Google Scholar] [CrossRef]
  23. Behzadian, M.; Otaghsara, S.K.; Yazdani, M.; Ignatius, J. A state-of the-art survey of TOPSIS applications. Expert Syst. Appl. 2012, 39, 13051–13069. [Google Scholar] [CrossRef]
Figure 1. Determinacy-uncertainty space [22].
Figure 2. The K-L distance-based generalized grey target decision method.
Table 1. Index values of every alternative.

Si   A1    A2    A3        A4          A5               A6
S1   2.0   500   [55, 56]  [4.7, 5.7]  [0.4, 0.5, 0.6]  [0.8, 0.9, 1.0]
S2   2.5   540   [30, 40]  [4.2, 5.2]  [0.2, 0.3, 0.4]  [0.4, 0.5, 0.6]
S3   1.8   480   [50, 60]  [5, 6]      [0.6, 0.7, 0.8]  [0.6, 0.7, 0.8]
S4   2.2   520   [35, 45]  [4.5, 5.5]  [0.4, 0.5, 0.6]  [0.4, 0.5, 0.6]
Table 2. Average values, standard deviations and maximum deviations of all indices.

Si   A1        A2        A3                A4               A5           A6
S1   2.0/0/0   500/0/0   55.5/0.7071/0.5   5.2/0.7071/0.5   0.5/0.1/0.1  0.9/0.1/0.1
S2   2.5/0/0   540/0/0   35/7.0711/5       4.7/0.7071/0.5   0.3/0.1/0.1  0.5/0.1/0.1
S3   1.8/0/0   480/0/0   55/7.0711/5       5.5/0.7071/0.5   0.7/0.1/0.1  0.7/0.1/0.1
S4   2.2/0/0   520/0/0   40/7.0711/5       5/0.7071/0.5     0.5/0.1/0.1  0.5/0.1/0.1
Note: “a/b/c” in Table 2 denotes “average value/standard deviation/maximum deviation”.
Table 3. Index binary connection number vectors transformed from index values.

Si   A1        A2         A3            A4          A5          A6
S1   2.0 + 0i  500 + 0i   55.5 + 0.5i   5.2 + 0.5i  0.5 + 0.1i  0.9 + 0.1i
S2   2.5 + 0i  540 + 0i   35 + 5i       4.7 + 0.5i  0.3 + 0.1i  0.5 + 0.1i
S3   1.8 + 0i  480 + 0i   55 + 5i       5.5 + 0.5i  0.7 + 0.1i  0.7 + 0.1i
S4   2.2 + 0i  520 + 0i   40 + 5i       5 + 0.5i    0.5 + 0.1i  0.5 + 0.1i
Table 4. Two-tuple numbers transformed from index binary connection number vectors.

Si   A1        A2        A3           A4          A5          A6
S1   (2.0, 0)  (500, 0)  (55.5, 0.5)  (5.2, 0.5)  (0.5, 0.1)  (0.9, 0.1)
S2   (2.5, 0)  (540, 0)  (35, 5)      (4.7, 0.5)  (0.3, 0.1)  (0.5, 0.1)
S3   (1.8, 0)  (480, 0)  (55, 5)      (5.5, 0.5)  (0.7, 0.1)  (0.7, 0.1)
S4   (2.2, 0)  (520, 0)  (40, 5)      (5, 0.5)    (0.5, 0.1)  (0.5, 0.1)
Table 5. Normalized two-tuple numbers of all alternatives and target center indices.

NSi   A1           A2           A3                A4                A5                A6
NS1   (0.2353, 0)  (0.2451, 0)  (0.2699, 0.0272)  (0.2505, 0.2449)  (0.2532, 0.2353)  (0.2615, 0.1791)
NS2   (0.2941, 0)  (0.2647, 0)  (0.2383, 0.3807)  (0.2482, 0.2685)  (0.2278, 0.3529)  (0.2421, 0.2985)
NS3   (0.2118, 0)  (0.2353, 0)  (0.2497, 0.2538)  (0.2517, 0.2327)  (0.2658, 0.1765)  (0.2542, 0.2239)
NS4   (0.2588, 0)  (0.2549, 0)  (0.2421, 0.3384)  (0.2496, 0.2539)  (0.2532, 0.2353)  (0.2421, 0.2985)
NCP   (0.2118, 0)  (0.2647, 0)  (0.2699, 0.0272)  (0.2482, 0.2685)  (0.2658, 0.1765)  (0.2615, 0.1791)
NCN   (0.2941, 0)  (0.2353, 0)  (0.2383, 0.3807)  (0.2517, 0.2327)  (0.2278, 0.3529)  (0.2421, 0.2985)
Table 6. Comparison of the results of two decision methods.

      Comprehensive Weighted K-L Distance Method      Comprehensive Weighted Proximity Method
Si    KL+      Rank    TS       Rank                  ICWP     Rank
S1    0.0265   1       0.6961   1                     0.2023   1
S2    0.1910   4       0.0737   4                     0.2928   4
S3    0.0804   2       0.4334   2                     0.2354   2
S4    0.1402   3       0.2194   3                     0.2695   3
