1. Introduction
Formulating uncertainty in solving multicriteria decision-making (MCDM) problems is one of the major concerns in MCDM study. In general, uncertainty originates from three primary sources: 1. the input of the decision-making process, 2. the uncertainty generated from the MCDM methods, and 3. the uncertainty in the results. The first uncertainty often liaises with the context of input of the decision analysis process, which in most cases involves humans as the decision-makers (DMs). DMs’ expectations, judgments, interpretations, different levels of knowledge/expertise, different levels of access to the sources of information, etc., are among the main reasons for the existence of uncertain inputs in a decision-making process. To deal with this, various methods and models have been developed. The fuzzy logic approach introduced by Zadeh [
1], the gray systems theory [
2], and rough set numbers [
3] are the most popular tools for dealing with such uncertainty in solving MCDM problems. The second uncertainty stems from the MCDM methods. When an MCDM problem deals with a large number of criteria and alternatives, the complexity of the problem increases since the analysis entangles multiple-feature data sets due to dealing with various criteria and multiple decision goals. Hence, uncertainty of information embedded in the decision matrices is inevitable due to the missing information during the analysis of alternatives, which MCDM methods cannot handle. This fundamental lack emanates from the MCDM algorithms’ architecture and the policies they employ to solve MCDM problems. The third uncertainty source is the dissimilarities in the outputs generated by MCDM methods for the same problem. Specifically, different results come from the different policies and philosophies MCDM methods utilize to solve MCDM problems. Solving this problem requires a global consensus on what MCDM method has superiority over other MCDM methods. It is indicated frequently in the MCDM literature that MCDM methods have no superiority over each other, and the ultimate assessment is evaluating the results in practice. Having said that, there exist methods and tools to investigate the results of the MCDM methods and validate the results to some extent, which mostly are grounded on comparing the MCDM results applied to the same case. The main contributions of this paper are fashioned around addressing the last two uncertainty sources, discussed earlier by proposing a new MCDM method using Shannon’s entropy and a tool for validating MCDM results.
Claude Shannon first introduced the statistical concept of entropy in the theory of communication and transmission of information in order to measure the average missing information in a random source [
4,
5]. Later, in 1949, Shannon and Weiner formulated the entropic content of information, which has been employed vastly in different branches of science since then [
6,
7]. Shternshis et al. [
8] define Shannon’s entropy as a measure for calculating randomness for symbolic dynamics symbolizing the average amount of uncertainty removed with the transmission of each symbol. Contreras-Reyes described Shannon’s entropy as a measure for quantifying aleatory aspects of random variables, representing an information quantity and value contained in a univariate/multivariate probability density function [
9]. Deng, who introduced Deng entropy, defined Shannon’s entropy as a measure for the calculation of the information volume of a system or a process and quantifying the expected information value contained in a message [
10]. Multiple measures of entropy were developed to apply to incomplete and complete probability distributions. The following equations show the different measures of the classic entropy introduced by Shannon and Weiner. Let us assume that
is a probability distribution. To measure its entropy, the classic Shannon’s entropy formula is as follows (see Equation (1)), where
denotes the entropy of
where
.
The Rényi entropies (see Equations (3) and (4)) constitute a family of information measures that generalize the well-known Shannon entropy, inheriting many of its properties, such as the Hartley entropy, the collision entropy, and the min-entropy [
11].
or
The smooth Rényi entropy developed by [
12] is represented in (Equation (5)), where
is the set of probability distributions.
The Havrda–Charvat entropy (see [
13,
14,
15]) is shown in (Equations (6) and (7)), where
denotes the real numbers:
or
Kapur’s entropies, from first kind to the fifth kind, are demonstrated in (Equations (8)–(12)), respectively [
16,
17,
18,
19,
20].
In addition to entropy, Dempster–Shafer evidence theory (DSET), complex evidence theory (CET), and recent work on using CET to convey quantum information of qubit states in Hilbert space for the expression of the uncertainty in knowledge [
21] are effective methods for dealing with uncertainty. DSET is a method that uses entropy functions and information volume to cope with uncertainty in a decision analysis process [
22]. As a generalized form of DSET, CET is another advanced method developed for uncertainty reasoning in knowledge-based systems and dealing with uncertainty in expert systems [
23]. Although DSET, as the root for the two latest methods, uses entropy functions, the concept of entropy is currently the most popular tool for measuring the uncertainty of information.
Along with the aforementioned forms of entropy, other entropy-based measures are developed and widely used in various branches of science, such as marketing, physics, statistics, computer science, AI, machine learning, search theory, economy, finance, and operations research.
MCDM is a subbranch of operations research that comprises MCDM problems and MCDM methods. An MCDM problem includes the available alternatives/options as the possible solutions for a decision-making problem and several criteria/attributes. Those criteria/attributes are defined to characterize the alternatives/options to showcase their potential for solving the decision-making problem. MCDM problems are often demonstrated as a matrix called the decision matrix, which is designed based on the alternatives, criteria, scores of alternatives against the criteria, and importance weights of the criteria. An MCDM method is a mathematical tool serving as the main tool to analyze an MCDM problem’s alternatives/options against the criteria/attributes through various algorithms to lead DM(s) to the optimal solutions [
24]. MCDM methods are categorized into two categories where multiattribute decision-making (MADM), or in general terms, MCDM methods, are developed to solve discrete problems. In contrast, the continuous problems are handled by multi-objective decision-making (MODM) techniques [
25]. MADM/MCDM methods are mainly classified into two classes following their performance in the decision matrix analysis. The first class includes the MCDM weighting methods, which are developed to measure the importance weight of criteria in reaching the decision’s goal(s), and the second class consists of the MCDM ranking methods, which employ the mentioned weights to evaluate the alternatives with different policies and philosophies. Shannon’s entropy is placed in the first class as an MCDM objective weighting method. The class of MCDM weighting methods diverges into two different subclasses based on human involvement as the decision-maker, the MCDM objective weighting methods, and MCDM subjective weighting methods. The MCDM subjective methods directly cover the decision-makers’ judgments, opinions, and expectations to extract the weight of importance of criteria with different mathematical models and algorithms. AHP, VIMM [
26], and WLD method [
27] could be addressed as the MCDM subjective weighting methods. The MCDM objective weighting methods, where Shannon’s entropy belongs, compute the importance weight of criteria with different approaches compared to the previous class. They employ mathematical algorithms to extract the weights from the decision matrix without interfering with decision-makers. Along with Shannon’s entropy, the CRITIC (The CRiteria Importance Through Intercriteria Correlation) method, developed by [
28], could be mentioned as another popular MCDM objective weighting method.
Shannon’s entropy is vastly utilized in MCDM applications in various fields to extract the weights of criteria. Here, some recent Shannon entropy applications are provided to showcase the popularity of the method in the MCDM methods’ applications. Some recent examples of the application of Shannon’s entropy in the supplier can be found in the following studies: for supplier evaluation [
29,
30,
31,
32,
33], for material selection problems [
34,
35,
36], for software evaluation [
37,
38,
39], and facility location selection [
40,
41,
42,
43]. There exists an MCDM method, called the alternative ranking process by alternatives’ stability scores (ARPASS), developed by [
25], where the entropy is used partly in one of the method’s extensions, called E-ARPASS, in which Shannon’s entropy is used to evaluate the stability of the alternatives instead of using the standard deviation; however, the impact of the entropy measurement is not significant enough to be interpreted that Shannon’s entropy is the central core of ARPASS’s functioning.
In this paper, we introduce a new MCDM method, called information values connected to the equilibrium points (IVEP), which evaluates decision alternatives of a complex decision-making problem by measuring the uncertainty of the alternative’s scores against the criteria using Shannon’s entropy to compute the information value each alternative generated through the decision-making process and a set of abstract points, called the equilibrium points. To measure the similarities between the IVEP algorithm’s outputs and other MCDM methods, a new statistical measure, called the Zakeri–Konstantas performance correlation coefficient, is proposed to evaluate the performance of the ranks generated by different MCDM methods in a comparison process in order to calculate the degree of the similarity.
The new method is introduced comprehensively and then applied to solve a material selection problem in the following sections. The method’s outputs are also compared with other MCDM methods’ results by the Zakeri–Konstantas performance correlation coefficient and the Hamming distance to determine the similarities of the obtained results. Hence, the remainder of the paper is organized as follows. In the second section, the IVEP method is introduced. The third section is devoted to applying the IVEP method to a real-world case. In the fourth section, the obtained results are comprehensively compared with other MCDM methods’ outputs. Conclusions and suggestions for future research are the final sections of the paper.
2. The IVEP Method
The IVEP multicriteria decision-making method was developed to solve complex decision-making problems constructed on many criteria and alternatives/options. The MCDM problems solve through analysis of the decision matrices, where each matrix is designed to evaluate the decision’s options/alternatives against a series of criteria that are, in fact, the common characteristics/attributes between the options/alternatives that describe them. These characteristics also demonstrate the conditions each alternative/option ought to possess to be a good choice for achieving the decision’s goal. Each decision matrix contains
number of data sets where
denotes the number of data sets that belong to the alternatives’ scores against the criteria,
stands for the number of data sets belonging to the criteria scores against the alternatives, and
stands for the data set that includes the weights of importance of criteria. Each data set provides information about the problem’s alternatives/options and criteria. As mentioned, Shannon’s entropy is a reliable method to analyze data sets that contain scores of criteria to determine their importance [
44]. Other information is provided by different data sets embedded in the decision matrix. The IVEP method analyzes the information provided for the decision’s options/alternatives to determine their priorities. The core process is similar to determining the importance weights of criteria, albeit it follows a different algorithm. The IVEP method is designed based on the information value and the equilibrium points. The equilibrium points are the abstract points in which the relatively balanced scores of the problem’s alternatives are located. The value of information is computed based on these points’ values using Shannon’s entropy. The IVEP algorithm is provided in the following steps, in which the rates of alternatives against the criteria are the algorithm’s inputs, and the ranks of alternatives are the outputs.
Step 1. Establishing the decision matrix, in which
denotes the scores of
th alternative against
th criterion, and
stands for the decision matrix (see Equation (13)).
Step 2. Normalizing the decision matrix and transforming it to a beneficial decision matrix, where all values are beneficial. The following equations show the normalized decision matrix, where
stands for the normalized decision matrix and
is the normalized score of
th alternative against
th criterion (see Equation (14)).
- -
For beneficial criteria, where higher values are favorable, the normalization process runs by (Equation (15)), where
stands for the scores of
th alternative against
th beneficial criterion.
- -
For unbeneficial criteria, where in contrast to beneficial criteria, lower values are expected, the normalization process is in accordance with (Equation (16)), where
denotes the scores of
th alternative against
th unbeneficial criterion.
Step 3. Determining the equilibrium points (EP) where the whole decision matrix values are set around them. The equilibrium points are located between the maximum values of scores of alternatives against
th criterion and their minimum values. The following equations demonstrate the process of determining EPs (Equations (17) and (18)).
or
Step 4. The fourth step is the computation of the separation measures from the equilibrium points by the calculation of the distance metrics between each alternative’s score against
th criterion and
th equilibrium point. The distance computation process runs by the following equation, where
is the classic
-dimensional Euclidean metric and
stands for the separation measure.
Step 5. In this step, the impact of the weights of criteria is calculated in a new decision matrix. The weights affect the separation measures that are shown as the weighted separation measure matrix containing the proportional abundance (
). The new matrix’s values are calculated as Equation (20), where
shows the weights of criteria.
Step 6. The calculation of the entropy of the data embedded in the weighted separation measure matrix runs in the sixth step of the IVEP algorithm. The entropy is calculated by the following equations (see Equation (21)), where
denotes the entropy of information alternatives generated. To have a better understanding of the process, to calculate the entropy, the alternatives and criteria are transposed, as displayed in
Figure 1.
Step 7. The computation of the information value (IV) each alternative has generated follows normalized entropy (see [
20,
45]), where
stands for the IV of
th alternative. In the process of measuring the weights of criteria in an MCDM problem, as indicated by [
46], The greater the value of the entropy, which corresponds to a particular criterion, implies the lesser the criterion’s weight and the lower the discriminative power. As shown in Equation (22), in contrast to the computation of the weights of criteria by Shannon’s entropy, the highest entropy means the higher IV, which results in a higher rank. The following equation also keeps the IV between 0 and 1.