1. Introduction
The grey target decision method has been studied by many scholars since it was proposed by Deng [1]. As research on decision-making has advanced, the indices of alternatives have been extended from pure real values to mixed attribute values, and the mixed attribute based grey target decision method was proposed to broaden the method's applicability. The core of the grey target decision method is to obtain the distances of the alternatives to their target center as the basis for decision-making. The real number-based grey target decision method calculates the target center distance with distance measures such as the Euclidean distance and the Mahalanobis distance [2,3]. The reported mixed attribute grey target decision method handles the target center distance in two ways: one is by distance measures, mainly the Euclidean distance and similar distances [4,5,6,7,8,9]; the other is by vector-based distance, as in the generalized grey target decision method [10,11]. The generalized grey target decision method differs from the conventional one in its calculation process, although it still obeys the principle of the conventional grey target decision method [10,11,12,13]. A tool for measuring the uncertainty of fuzzy numbers in the mixed attribute based grey target decision method is needed to make decision-making more valuable in terms of both theoretical significance and practical application. Entropy is often used to measure uncertainty, so it is natural to apply it to the generalized grey target decision method involving fuzzy numbers. In particular, the Kullback-Leibler distance (K-L distance), which originates from cross-entropy, can reflect the similarity of two discrete random distributions [14]. Cross-entropy has been widely used in many fields: Ioannis and George applied it to intuitionistic fuzzy information pattern recognition [15]; Li and Wu studied the alternative preference problem based on intuitionistic fuzzy cross-entropy [16]; Xia and Xu carried out group decision-making comprising intuitionistic fuzzy information [17]; Smieja and Geiger studied the clustering problem constrained by information using cross-entropy [18]; and Tang et al. proposed an optimization algorithm based on cross-entropy [19].
The principle of the proposed approach is as follows: all indices of alternatives are first converted into binary connection number vectors, which are divided into deterministic terms and uncertain terms following the previous method. Then the deterministic terms and uncertain terms of the positive and negative target centers under each attribute are obtained. Next, the two-tuple (determinacy, uncertainty) numbers derived from the index binary connection number vectors are deduced. Following that, the K-L distances of all alternatives to their positive and negative target centers are integrated using the TOPSIS method; the final decision is based on the integrated value, for which the bigger the better.
2. Basic Theory
2.1. Fuzzy Number
Definition 1. Let $R$ be a real domain, and let $\tilde{x}$ denote a fuzzy number. Then $[x^L, x^U]$, $[x^L, x^M, x^U]$ and $[x^L, x^M, x^N, x^U]$ are the expressions of $\tilde{x}$ called the interval number, triangular fuzzy number and trapezoidal fuzzy number, respectively, where $x^L$, $x^M$, $x^N$ and $x^U$ satisfy $0 < x^L < x^M < x^N < x^U \in R$ [20,21].

2.2. Binary Connection Number
Definition 2. Let $R$ be a real domain; $A + Bi$ is called a binary connection number, where $A$ represents the deterministic term, $B$ the uncertain term, and $i$ a variable term unifying the determinacy and uncertainty of a fuzzy number, with $A, B \in R$ and $i \in [-1, 1]$.
Definition 3. Let $A$ and $B$ be the mean value and the deviation value of the $n$ ($n \ge 2$) parameters of a fuzzy number $\tilde{x}$, respectively; then

$$u(\tilde{x}) = A + Bi \quad (1)$$

is called a mean value-deviation value connection number, where $\bar{x}$, $\sigma$, $m_s$ and $B$ are calculated by use of Equations (2)–(5):

$$\bar{x} = \frac{1}{n}\sum_{j=1}^{n} x_j \quad (2)$$

$$\sigma = \sqrt{\frac{1}{n-1}\sum_{j=1}^{n}\left(x_j - \bar{x}\right)^2} \quad (3)$$

$$m_s = \max\left(x^U - \bar{x},\; \bar{x} - x^L\right) \quad (4)$$

$$B = \min\left(\sigma,\; m_s\right) \quad (5)$$
where $x_j$ is the $j$th parameter of the fuzzy number $\tilde{x}$, $\bar{x}$ is the mean value of the parameters, $\sigma$ denotes the standard deviation of the parameters, $m_s$ is the maximum deviation of the parameters, $B$ is the minimum of $\sigma$ and $m_s$, and $x^L$ and $x^U$ are the fuzzy number's lower and upper limits, respectively [10,22].

Definition 4. The mutual interaction of the mean value and the deviation value (standard deviation or maximum deviation) of the binary connection number can be mapped to the determinacy-uncertainty space (D-U space). If $\vec{OE}$ represents the corresponding vector in D-U space, then $i$ only denotes the sign of the uncertain term without representing a changeable value [20,21]. Figure 1 shows a D-U space. The U-axis represents the relative uncertainty measure, while the D-axis denotes the relative deterministic measure. As seen from Figure 1, $A$ and $Bi$ interact with each other; the space reflection of this interaction is the vector $\vec{OE}$ from $O$ to $E$, and the degree of interaction is represented by the modulus of the vector $\vec{OE}$, denoted by $r$.
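As a concrete illustration, the mean value-deviation value connection number of Definition 3 can be sketched in Python. The ascending parameter ordering and the use of the sample standard deviation are assumptions made for this sketch; the paper's Equations (2)–(5) fix the exact forms.

```python
import statistics

def connection_number(params):
    """Mean value-deviation value binary connection number A + Bi.

    `params` are the fuzzy number's parameters sorted ascending, so that
    params[0] = x^L and params[-1] = x^U.  A is the mean of the parameters;
    B is the smaller of the (sample) standard deviation and the maximum
    deviation from the mean.  A crisp real number maps to A + 0i.
    """
    if len(params) < 2:                       # real number: no uncertainty
        return params[0], 0.0
    mean = sum(params) / len(params)          # mean value (cf. Equation (2))
    sigma = statistics.stdev(params)          # sample standard deviation assumed
    max_dev = max(params[-1] - mean, mean - params[0])  # maximum deviation m_s
    return mean, min(sigma, max_dev)          # B = min(sigma, m_s)

print(connection_number([2, 4]))        # interval number  -> (3.0, 1.0)
print(connection_number([1, 2, 4]))     # triangular fuzzy number
print(connection_number([1, 2, 3, 5]))  # trapezoidal fuzzy number
```

For an interval number $[2, 4]$ the deterministic term is the mean $3$, and the uncertain term is the maximum deviation $1$, since it is smaller than the sample standard deviation $\sqrt{2}$.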
2.3. Kullback-Leibler Distance
Definition 5. Kullback-Leibler distance [14,15]. Let $S = (s_1, s_2, \dots, s_n)$ and $E = (e_1, e_2, \dots, e_n)$ be two vectors, where $s_j, e_j > 0$ and $\sum_{j=1}^{n} s_j = \sum_{j=1}^{n} e_j = 1$; then the K-L distance of $S$ and $E$ is given by Equation (6):

$$D(S, E) = \sum_{j=1}^{n} s_j \log \frac{s_j}{e_j} \quad (6)$$

$D(S, E)$ exhibits the following characteristics:
- (1) $D(S, E) \ge 0$;
- (2) $D(S, E) = 0$, when and only when $S = E$.
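The K-L distance of Equation (6) can be computed directly; a minimal Python sketch follows, with illustrative probability vectors:

```python
import math

def kl_distance(s, e):
    """Kullback-Leibler distance of two discrete distributions (Equation (6))."""
    return sum(sj * math.log(sj / ej) for sj, ej in zip(s, e))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_distance(p, q))   # positive: the distributions differ
print(kl_distance(p, p))   # 0.0: characteristic (2), D(S, E) = 0 iff S = E
```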
If $e_j = 0$ for some $j$, then $D(S, E)$ is not defined. So, the original K-L distance needs to be improved. The revised version of the K-L distance is as follows.

Definition 6. Comprehensive weighted K-L distance. Let the symbols $S = ((a_1, b_1), (a_2, b_2), \dots, (a_n, b_n))$ and $E = ((c_1, d_1), (c_2, d_2), \dots, (c_n, d_n))$ refer to the vectors of two-tuple (determinacy, uncertainty) numbers, where $(a_j, b_j)$ and $(c_j, d_j)$ are two-tuple (determinacy, uncertainty) numbers under the same attributes in $S$ and $E$, respectively. Denote the weight vector by $w = (w_1, w_2, \dots, w_n)$ with

$$\sum_{j=1}^{n} w_j = 1 \quad (7)$$

and assume that for the two-tuples $(a_j, b_j)$ and $(c_j, d_j)$ the following normalization condition is satisfied:

$$a_j + b_j = 1, \quad c_j + d_j = 1, \quad j = 1, 2, \dots, n \quad (8)$$

The comprehensive weighted K-L distance $D(S, E)$ can be calculated using the following equation:

$$D(S, E) = \sum_{j=1}^{n} w_j \left( a_j \log \frac{a_j}{c_j} + b_j \log \frac{b_j}{d_j} \right) \quad (9)$$

Then the function $D(S, E)$ has the following properties:
- (1) $D(S, E) \ge 0$;
- (2) $D(S, E) = 0$, when and only when $S = E$, or what amounts to the same, $a_j = c_j$ and $b_j = d_j$ for $j = 1, 2, \dots, n$;
- (3) when $a_j = 0$ or $b_j = 0$, then, by definition, $0 \log 0 = 0$.
The assertions in (1) and (2) can be proved as follows. Assume that $a_j, b_j > 0$ and $c_j, d_j > 0$ for $j = 1, 2, \dots, n$. We apply the convexity of the function $f(t) = t \log t$, or what amounts to the same, the log-sum inequality, also called Gibbs' inequality: for non-negative numbers $x_1, \dots, x_m$ and $y_1, \dots, y_m$,

$$\sum_{k=1}^{m} x_k \log \frac{x_k}{y_k} \ge \left( \sum_{k=1}^{m} x_k \right) \log \frac{\sum_{k=1}^{m} x_k}{\sum_{k=1}^{m} y_k}.$$

Putting $x_1 = a_j$, $x_2 = b_j$, $y_1 = c_j$ and $y_2 = d_j$, the log-sum inequality implies

$$a_j \log \frac{a_j}{c_j} + b_j \log \frac{b_j}{d_j} \ge (a_j + b_j) \log \frac{a_j + b_j}{c_j + d_j} = 1 \cdot \log \frac{1}{1} = 0,$$

where the final equality follows from the normalization conditions on $S$ and $E$ in Equation (8). Hence every summand of Equation (9) is non-negative, and since $w_j \ge 0$, we infer $D(S, E) \ge 0$. This shows the inequality in property (1). If $D(S, E) = 0$, then all the previous inequalities are in fact equalities. By the equality case of the log-sum inequality (a kind of converse to the Jensen inequality), this can only be true provided $a_j = c_j$ and $b_j = d_j$ for $j = 1, 2, \dots, n$. Observe that the proofs of properties (1) and (2) can also be adapted to the situation where some of the $a_j$'s or some of the $b_j$'s are zero: essentially the same proof works by summing only over those $j$ for which $a_j > 0$ or for which $b_j > 0$.
However, if the condition on $a_j + b_j$ and $c_j + d_j$ in Equation (8) is not satisfied, then $D(S, E) < 0$ may occur; thus an improved version of $D(S, E)$, denoted $D'(S, E)$, is given as follows:

$$D'(S, E) = \sum_{j=1}^{n} w_j \left( a_j \log \frac{a_j}{c_j} + b_j \log \frac{b_j}{d_j} - (a_j + b_j) + (c_j + d_j) \right) \quad (10)$$

In Equation (10), $D'(S, E)$ has the same characteristics as $D(S, E)$, but it also solves the special problem that arises when the condition in Equation (8) is not satisfied; when Equation (8) does hold, the added correction terms cancel and $D'(S, E)$ reduces to $D(S, E)$.
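The comprehensive weighted K-L distance with the $0 \log 0 = 0$ convention of property (3) can be sketched in Python. The pairwise form $a_j \log(a_j/c_j) + b_j \log(b_j/d_j)$ follows Definition 6 as stated above; the data below are illustrative.

```python
import math

def xlogx_ratio(x, y):
    """x * log(x / y), with the convention 0 * log(0 / y) = 0 (property (3))."""
    return 0.0 if x == 0 else x * math.log(x / y)

def weighted_kl(S, E, w):
    """Comprehensive weighted K-L distance of two vectors of
    (determinacy, uncertainty) two-tuples, each pair summing to one."""
    return sum(wj * (xlogx_ratio(a, c) + xlogx_ratio(b, d))
               for (a, b), (c, d), wj in zip(S, E, w))

S = [(0.7, 0.3), (0.6, 0.4)]
E = [(0.5, 0.5), (0.6, 0.4)]
w = [0.5, 0.5]
print(weighted_kl(S, E, w))   # positive: S and E differ in the first tuple
print(weighted_kl(S, S, w))   # 0.0: property (2)
```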
3. Generalized Grey Target Decision Method for Mixed Attributes Based on the K-L Distance
Let $S = \{S_1, S_2, \dots, S_m\}$, $A = \{A_1, A_2, \dots, A_n\}$ and $w = (w_1, w_2, \dots, w_n)$ be the alternative set, the attribute set and the weight vector of index attributes, respectively; then the index of alternative $S_i$ under attribute $A_j$ is $x_{ij}$ ($i = 1, 2, \dots, m$; $j = 1, 2, \dots, n$).
3.1. Transformation of Index Values into Binary Connection Numbers
Different types of index values can be converted into binary connection numbers $A + Bi$, regarded as vectors in D-U space, using Equations (1)–(5). It is noteworthy that the converted binary connection number for a real number is of the form $A + 0i$, which means that the deterministic term is the real number itself and the uncertain term is $0i$. The transformed index of alternative $S_i$ under attribute $A_j$ can be expressed as the vector $x_{ij} = A_{ij} + B_{ij} i$.
3.2. Determination of the Target Centre Index Vectors
Having obtained the binary connection numbers converted from all index values, each index $x_{ij} = A_{ij} + B_{ij} i$ can also be denoted by the two-tuple number $(A_{ij}, B_{ij})$. The benefit type index set and the cost type index set are denoted by $J^+$ and $J^-$, respectively. Then the positive and negative target center index vectors of two-tuple (determinacy, uncertainty) numbers, denoted by $c_j^+$ and $c_j^-$, can be obtained using Equations (11) and (12).

The positive target center index of two-tuple (determinacy, uncertainty) is as follows:

$$c_j^+ = \begin{cases} \left( \max\limits_{1 \le i \le m} A_{ij},\; \min\limits_{1 \le i \le m} B_{ij} \right), & j \in J^+ \\ \left( \min\limits_{1 \le i \le m} A_{ij},\; \min\limits_{1 \le i \le m} B_{ij} \right), & j \in J^- \end{cases} \quad (11)$$

The negative target center index of two-tuple (determinacy, uncertainty) is as follows:

$$c_j^- = \begin{cases} \left( \min\limits_{1 \le i \le m} A_{ij},\; \max\limits_{1 \le i \le m} B_{ij} \right), & j \in J^+ \\ \left( \max\limits_{1 \le i \le m} A_{ij},\; \max\limits_{1 \le i \le m} B_{ij} \right), & j \in J^- \end{cases} \quad (12)$$

Equation (11) indicates that the positive target center index under attribute $A_j$ is formed from the maximum deterministic term and the minimum uncertain term for benefit-type indices, and from the minimum deterministic term and the minimum uncertain term for cost-type indices. Equation (12) represents the fact that the negative target center index under attribute $A_j$ is formed from the minimum deterministic term and the maximum uncertain term for benefit-type indices, and from the maximum deterministic term and the maximum uncertain term for cost-type indices.
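The target center selection described above can be sketched in Python; the data layout and function name are illustrative, not taken from the paper.

```python
def target_centres(tuples, benefit):
    """Positive/negative target-centre two-tuples per attribute.

    `tuples[i][j]` is the (determinacy, uncertainty) pair of alternative i
    under attribute j; `benefit[j]` is True for benefit-type attributes.
    """
    n = len(tuples[0])
    pos, neg = [], []
    for j in range(n):
        A = [row[j][0] for row in tuples]   # deterministic terms under attr j
        B = [row[j][1] for row in tuples]   # uncertain terms under attr j
        if benefit[j]:
            pos.append((max(A), min(B)))    # benefit type: max A, min B
            neg.append((min(A), max(B)))    # benefit type: min A, max B
        else:
            pos.append((min(A), min(B)))    # cost type: min A, min B
            neg.append((max(A), max(B)))    # cost type: max A, max B
    return pos, neg

tuples = [[(0.8, 0.1), (3.0, 0.5)],
          [(0.6, 0.3), (2.0, 0.2)]]
pos, neg = target_centres(tuples, benefit=[True, False])
print(pos)   # [(0.8, 0.1), (2.0, 0.2)]
print(neg)   # [(0.6, 0.3), (3.0, 0.5)]
```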
3.3. Normalization of All Alternative Indices
The index vectors of all alternatives $x_{ij} = A_{ij} + B_{ij} i$ and the target center index vectors $c_j^+$ and $c_j^-$ can be expressed as vectors of two-tuple (deterministic degree, uncertainty degree) numbers by Equation (13). In Equation (13), $a_{ij}$ and $b_{ij}$ denote respectively the deterministic degree and the uncertainty degree under the same attribute in the normalized binary connection numbers. Then the vector of two-tuple (deterministic degree, uncertainty degree) numbers can be given as $((a_{i1}, b_{i1}), (a_{i2}, b_{i2}), \dots, (a_{in}, b_{in}))$. It should be noted that a real-number attribute cannot be normalized in this step, or an error will occur when computing the uncertain terms of the real numbers, as they are all zero under the same attribute.

The $a_{ij}$ and $b_{ij}$ in a two-tuple (deterministic degree, uncertainty degree) number should be normalized further, for they are incomparable under different attributes. The normalization equation is Equation (14), in which $\tilde{a}_{ij}$ and $\tilde{b}_{ij}$ are the normalized deterministic term and uncertainty term, respectively, in the two-tuple number.
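Since Equations (13) and (14) are not reproduced in this excerpt, the following Python sketch shows one plausible reading consistent with the surrounding description: per-attribute max-scaling into degrees, followed by a linear rescaling so that each pair sums to one. Both formulas are assumptions for illustration, not the paper's verbatim equations.

```python
def normalize_matrix(conn):
    """Hypothetical two-step normalization of (A, B) connection-number pairs.

    Step 1: scale each (A, B) pair by the column-wise maxima under the same
    attribute, yielding (deterministic degree, uncertainty degree) two-tuples.
    A column whose uncertain terms are all zero (pure real numbers) would
    cause a division by zero, matching the caveat in the text; it is kept
    at zero here instead.
    Step 2: linearly rescale each two-tuple so the two degrees sum to one,
    making pairs comparable across attributes.
    """
    m, n = len(conn), len(conn[0])
    out = [[None] * n for _ in range(m)]
    for j in range(n):
        max_a = max(conn[i][j][0] for i in range(m))
        max_b = max(conn[i][j][1] for i in range(m))
        for i in range(m):
            A, B = conn[i][j]
            a = A / max_a if max_a else 0.0
            b = B / max_b if max_b else 0.0   # all-zero column: leave 0
            s = a + b
            out[i][j] = (a / s, b / s) if s else (0.0, 0.0)
    return out

conn = [[(4.0, 1.0)], [(2.0, 0.2)]]
print(normalize_matrix(conn))   # each pair sums to one
```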
3.4. Integration by TOPSIS Method
The closeness of the comprehensive weighted K-L distances is used to judge the alternatives, taking full account of the distances of each alternative to both its positive and its negative target center. The TOPSIS method has been used extensively since it was proposed [23]. Let $D^+(S_i)$ and $D^-(S_i)$ represent, respectively, the positive comprehensive weighted K-L distance (to the positive target center) and the negative comprehensive weighted K-L distance (to the negative target center) of alternative $S_i$; then the closeness of the comprehensive weighted K-L distances can be obtained by using Equation (15):

$$C(S_i) = \frac{D^-(S_i)}{D^+(S_i) + D^-(S_i)} \quad (15)$$

The decision-making can be based on $C(S_i)$, for which the larger the better.
3.5. Decision-Making Steps
The procedure of generalized grey target decision method based on K-L distance is shown in
Figure 2; the detailed steps therein are as follows:
- (1)
All indices of alternatives are converted into binary connection number vectors, comprising two-tuple (determinacy, uncertainty) numbers, by using Equations (1)–(5).
- (2)
The positive and negative target center indices of two-tuple (determinacy, uncertainty) number under all attributes are determined by using Equations (11) and (12).
- (3)
All two-tuple (determinacy, uncertainty) numbers are transformed into two-tuple (deterministic degree, uncertainty degree) numbers by using Equation (13) and they can also be normalized using the linear method given in Equation (14).
- (4)
The weights of all index attributes are calculated.
- (5)
The comprehensive weighted K-L distances of the normalized two-tuple (deterministic degree, uncertainty degree) numbers between all alternatives and their positive and negative target centers are calculated by using Equation (9) or Equation (10); then the closeness of each alternative can be obtained by use of the TOPSIS method and Equation (15).
- (6)
The decision is made according to the closeness of each alternative, for which the larger the better.
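Under simplifying assumptions (indices already expressed as normalized two-tuples, equal weights, all attributes benefit-type), steps (2) and (5)–(6) can be sketched end-to-end in Python; all data and names below are illustrative, not taken from the paper.

```python
import math

def wkl(S, E, w):
    """Comprehensive weighted K-L distance with the 0*log(0) = 0 convention."""
    t = lambda x, y: 0.0 if x == 0 else x * math.log(x / y)
    return sum(wj * (t(a, c) + t(b, d)) for (a, b), (c, d), wj in zip(S, E, w))

# Normalized (deterministic degree, uncertainty degree) two-tuples of
# three alternatives under two benefit-type attributes (toy data).
alts = [[(0.9, 0.1), (0.6, 0.4)],
        [(0.7, 0.3), (0.8, 0.2)],
        [(0.5, 0.5), (0.5, 0.5)]]
w = [0.5, 0.5]

# Step (2): positive / negative target centres for benefit-type attributes.
pos = [(max(a[j][0] for a in alts), min(a[j][1] for a in alts)) for j in range(2)]
neg = [(min(a[j][0] for a in alts), max(a[j][1] for a in alts)) for j in range(2)]

# Steps (5)-(6): TOPSIS closeness of the weighted K-L distances; larger is better.
for k, S in enumerate(alts, 1):
    d_pos, d_neg = wkl(S, pos, w), wkl(S, neg, w)
    closeness = d_neg / (d_pos + d_neg)
    print(f"S{k}: closeness = {closeness:.4f}")
```

In this toy run the third alternative coincides with the negative target center, so its closeness is zero and it ranks last.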