Article

Solving Decision-Making Problems Using a Measure for Information Values Connected to the Equilibrium Points (IVEP) MCDM Method and Zakeri–Konstantas Performance Correlation Coefficient

Geneva School of Economics and Management, University of Geneva, 1211 Geneva, Switzerland
*
Author to whom correspondence should be addressed.
Information 2022, 13(11), 512; https://doi.org/10.3390/info13110512
Submission received: 11 September 2022 / Revised: 24 October 2022 / Accepted: 25 October 2022 / Published: 27 October 2022

Abstract

In this paper, a new multicriteria decision-making (MCDM) method, called a measure for information values connected to the equilibrium points (IVEP) method, and a new statistical measure for quantifying the similarity of the outputs of MCDM algorithms in a comparison process, called the Zakeri–Konstantas performance correlation coefficient, are introduced. The IVEP method uses Shannon's entropy as the primary tool to measure the information embedded in the decision matrix in order to evaluate the decision's options/alternatives for complex decision-making problems with a large number of criteria and alternatives. The second concept that drives the IVEP method is the equilibrium points, which signify the points in a vector space where the scores of the decision's options/alternatives are equilibrated. Instead of using linear functions to compute the similarity between the data sets generated by MCDM algorithms through distance calculations, the Zakeri–Konstantas performance correlation coefficient focuses on evaluating the ranking performance of MCDM methods in an analytic comparison process in order to determine the degree of similarity. The IVEP method is applied to a real-world decision-making problem: a material selection problem. A comparison analysis was performed on the results obtained from the IVEP, TOPSIS, WPM, COPRAS, and ARAS MCDM methods using the Zakeri–Konstantas performance correlation coefficient and the Hamming distance. Both measures revealed that the IVEP algorithm's outputs have the highest similarity to the TOPSIS outputs, among others. Nevertheless, the degrees of similarity are distinct due to the different approaches of the measures used.

1. Introduction

Formulating uncertainty in solving multicriteria decision-making (MCDM) problems is one of the major concerns in MCDM study. In general, uncertainty originates from three primary sources: 1. the input of the decision-making process, 2. the uncertainty generated by the MCDM methods, and 3. the uncertainty in the results. The first type of uncertainty relates to the context of the input of the decision analysis process, which in most cases involves humans as the decision-makers (DMs). DMs' expectations, judgments, interpretations, different levels of knowledge/expertise, different levels of access to sources of information, etc., are among the main reasons for the existence of uncertain inputs in a decision-making process. To deal with this, various methods and models have been developed. The fuzzy logic approach introduced by Zadeh [1], gray systems theory [2], and rough set numbers [3] are the most popular tools for dealing with such uncertainty in solving MCDM problems. The second type of uncertainty stems from the MCDM methods themselves. When an MCDM problem deals with a large number of criteria and alternatives, the complexity of the problem increases, since the analysis entangles multiple-feature data sets arising from the various criteria and multiple decision goals. Hence, uncertainty in the information embedded in the decision matrices is inevitable due to information missed during the analysis of the alternatives, which MCDM methods cannot handle. This fundamental lack emanates from the MCDM algorithms' architecture and the policies they employ to solve MCDM problems. The third source of uncertainty is the dissimilarity of the outputs generated by different MCDM methods for the same problem. Specifically, different results come from the different policies and philosophies MCDM methods utilize to solve MCDM problems. Solving this problem requires a global consensus on which MCDM method has superiority over the others.
It is indicated frequently in the MCDM literature that MCDM methods have no superiority over each other, and the ultimate assessment is evaluating the results in practice. Having said that, there exist methods and tools to investigate the results of MCDM methods and validate them to some extent, which are mostly grounded in comparing the results of different MCDM methods applied to the same case. The main contributions of this paper are fashioned around addressing the last two uncertainty sources discussed earlier, by proposing a new MCDM method using Shannon's entropy and a tool for validating MCDM results.
Claude Shannon first introduced the statistical concept of entropy in the theory of communication and transmission of information in order to measure the average missing information in a random source [4,5]. Later, in 1949, Shannon and Wiener formulated the entropic content of information, which has been employed widely in different branches of science since then [6,7]. Shternshis et al. [8] define Shannon's entropy as a measure of randomness for symbolic dynamics, representing the average amount of uncertainty removed with the transmission of each symbol. Contreras-Reyes described Shannon's entropy as a measure for quantifying the aleatory aspects of random variables, representing the quantity and value of information contained in a univariate/multivariate probability density function [9]. Deng, who introduced Deng entropy, defined Shannon's entropy as a measure of the information volume of a system or a process, quantifying the expected information value contained in a message [10]. Multiple measures of entropy have been developed for incomplete and complete probability distributions. The following equations show the different measures of the classic entropy introduced by Shannon and Wiener. Let us assume that $P = \{p_1, \dots, p_m\}$ is a probability distribution. To measure its entropy, the classic Shannon entropy formula is given in Equation (1), where $E_i(P)$ denotes the entropy of $p_i$ and $i = \{1, \dots, m\}$.
$E_i(P) = -\sum_{i=1}^{m} p_i \log_2 p_i, \quad P = \{p_1, \dots, p_m\}, \ i = \{1, \dots, m\};$ (1)
$E_i(P) = -\int p_i \log_2 p_i \, dx, \quad P = \{p_1, \dots, p_m\}, \ i = \{1, \dots, m\};$ (2)
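As a concrete illustration, the discrete form of Equation (1) can be computed in a few lines of Python. This is a minimal sketch; the function name is ours, and terms with $p_i = 0$ are skipped, following the usual $0 \log 0 = 0$ convention.

```python
import math

def shannon_entropy(p):
    """Classic Shannon entropy (in bits) of a discrete distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# A uniform distribution over 4 outcomes carries 2 bits of information.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # -> 2.0
```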
The Rényi entropies (see Equations (3) and (4)) constitute a family of information measures that generalize the well-known Shannon entropy and include the Hartley entropy, the collision entropy, and the min-entropy as special cases [11].
$E_\alpha(P) = \frac{1}{1-\alpha} \log_2 \left( \sum_{i=1}^{m} p_i^{\alpha} \left( \sum_{i=1}^{m} p_i \right)^{-1} \right), \quad \alpha \in [0, \infty), \ \alpha \neq 1;$ (3)
or
$E_\alpha(P_i) = \frac{1}{1-\alpha} \log_2 \left[ \int [P_i]^{\alpha} \, dx \right];$ (4)
The smooth Rényi entropy developed by [12] is represented in Equation (5), where $\mathcal{B}^{\varepsilon}(P) = \{Q : \delta(P, Q) \leq \varepsilon\}$ is the set of probability distributions within distance $\varepsilon$ of $P$.
$E_\alpha^{\varepsilon}(P_i) = \frac{1}{1-\alpha} \inf_{Q \in \mathcal{B}^{\varepsilon}(P)} \log_2 \left( \sum_{i=1}^{m} Q(i)^{\alpha} \right), \quad \varepsilon \geq 0, \ \alpha \in [0, \infty);$ (5)
The Havrda–Charvat entropy (see [13,14,15]) is shown in Equations (6) and (7), where $\mathbb{R}$ denotes the real numbers:
$E_\beta(P_i) = \left( 2^{1-\beta} - 1 \right)^{-1} \left( \sum_{i=1}^{m} p_i^{\beta} - 1 \right), \quad \beta \neq 1;$ (6)
or
$E_\beta(P_i) = \frac{1}{1-\beta} \left[ \int [P_i]^{\beta} \, dx - 1 \right], \quad \beta \neq 1, \ \beta \in \mathbb{R};$ (7)
Kapur's entropies, from the first kind to the fifth kind, are shown in Equations (8)–(12), respectively [16,17,18,19,20].
$E(P_i) = -\sum_{i=1}^{m} p_i \log_2 p_i + \frac{1}{\alpha} \sum_{i=1}^{m} (1 + \alpha p_i) \log_2 (1 + \alpha p_i), \quad \alpha \geq -1;$ (8)
$E(P_i) = \frac{1}{1-\alpha} \log_2 \left( \left( \sum_{i=1}^{m} p_i^{\beta} \right)^{-1} \sum_{i=1}^{m} p_i^{\alpha + \beta - 1} \right), \quad \alpha \neq 1, \ \beta > 0, \ \alpha + \beta - 1 > 0;$ (9)
$E(P_i) = -\sum_{i=1}^{m} p_i \log_2 p_i + \frac{1}{\alpha^2} \sum_{i=1}^{m} (1 + \alpha p_i) \log_2 (1 + \alpha p_i), \quad \alpha > 0;$ (10)
$E(P_i) = 1 - \left( \sum_{i=1}^{m} p_i^{\alpha} \right)^{\frac{1}{\alpha - 1}}, \quad \alpha > 0;$ (11)
$E(P_i) = \left( \sum_{i=1}^{m} p_i^{\alpha} \right)^{\frac{1}{\alpha - 1}}, \quad 0 < \alpha < 1;$ (12)
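For readers who want to experiment with these generalized entropies, the discrete Rényi and Havrda–Charvat measures (Equations (3) and (6)) can be sketched as follows. This is a minimal illustration assuming a complete probability distribution, so the $(\sum p_i)^{-1}$ factor in Equation (3) equals 1; the function names are ours.

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy (Equation (3)) of a complete distribution, alpha != 1."""
    return (1.0 / (1.0 - alpha)) * math.log2(sum(pi ** alpha for pi in p))

def havrda_charvat_entropy(p, beta):
    """Havrda-Charvat entropy (Equation (6)), beta != 1."""
    return (sum(pi ** beta for pi in p) - 1) / (2 ** (1 - beta) - 1)

# For a uniform distribution, the Renyi entropy equals log2(m) for any alpha.
print(renyi_entropy([0.25] * 4, 2.0))  # -> 2.0
```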
In addition to entropy, Dempster–Shafer evidence theory (DSET), complex evidence theory (CET), and recent work using CET to convey the quantum information of qubit states in Hilbert space for expressing uncertainty in knowledge [21] are effective methods for dealing with uncertainty. DSET is a method that uses entropy functions and information volume to cope with uncertainty in a decision analysis process [22]. As a generalized form of DSET, CET is another advanced method developed for uncertainty reasoning in knowledge-based systems and for dealing with uncertainty in expert systems [23]. Although DSET, as the root of the two latter methods, uses entropy functions, the concept of entropy remains the most popular tool for measuring the uncertainty of information.
Along with the aforementioned forms of entropy, other entropy-based measures have been developed and are widely used in various branches of science, such as marketing, physics, statistics, computer science, AI, machine learning, search theory, economics, finance, and operations research.
MCDM is a subbranch of operations research that comprises MCDM problems and MCDM methods. An MCDM problem includes the available alternatives/options as the possible solutions for a decision-making problem, together with several criteria/attributes. Those criteria/attributes are defined to characterize the alternatives/options and showcase their potential for solving the decision-making problem. MCDM problems are often represented as a matrix, called the decision matrix, which is built from the alternatives, the criteria, the scores of the alternatives against the criteria, and the importance weights of the criteria. An MCDM method is a mathematical tool for analyzing an MCDM problem's alternatives/options against the criteria/attributes through various algorithms to lead the DM(s) to the optimal solutions [24]. MCDM methods fall into two categories: multiattribute decision-making (MADM) methods, or in general terms MCDM methods, are developed to solve discrete problems, whereas continuous problems are handled by multi-objective decision-making (MODM) techniques [25]. MADM/MCDM methods are further classified into two classes according to their role in the decision matrix analysis. The first class comprises the MCDM weighting methods, which measure the importance weights of the criteria for reaching the decision's goal(s), and the second class consists of the MCDM ranking methods, which employ those weights to evaluate the alternatives with different policies and philosophies. Shannon's entropy belongs to the first class as an MCDM objective weighting method. The class of MCDM weighting methods diverges into two subclasses based on human involvement as the decision-maker: the MCDM objective weighting methods and the MCDM subjective weighting methods.
The MCDM subjective methods directly incorporate the decision-makers' judgments, opinions, and expectations to extract the importance weights of the criteria with different mathematical models and algorithms. AHP, VIMM [26], and the WLD method [27] are examples of MCDM subjective weighting methods. The MCDM objective weighting methods, to which Shannon's entropy belongs, compute the importance weights of the criteria with a different approach: they employ mathematical algorithms to extract the weights from the decision matrix without the intervention of the decision-makers. Along with Shannon's entropy, the CRITIC (CRiteria Importance Through Intercriteria Correlation) method, developed by [28], is another popular MCDM objective weighting method.
Shannon's entropy is widely utilized in MCDM applications in various fields to extract the weights of criteria. Here, some recent applications are provided to showcase the popularity of the method: supplier evaluation [29,30,31,32,33], material selection problems [34,35,36], software evaluation [37,38,39], and facility location selection [40,41,42,43]. There also exists an MCDM method, called the alternative ranking process by alternatives' stability scores (ARPASS), developed by [25], in which entropy is used in one of the method's extensions, called E-ARPASS, where Shannon's entropy evaluates the stability of the alternatives instead of the standard deviation; however, the impact of the entropy measurement is not significant enough to conclude that Shannon's entropy is the central core of ARPASS's functioning.
In this paper, we introduce a new MCDM method, called information values connected to the equilibrium points (IVEP), which evaluates decision alternatives of a complex decision-making problem by measuring the uncertainty of the alternative’s scores against the criteria using Shannon’s entropy to compute the information value each alternative generated through the decision-making process and a set of abstract points, called the equilibrium points. To measure the similarities between the IVEP algorithm’s outputs and other MCDM methods, a new statistical measure, called the Zakeri–Konstantas performance correlation coefficient, is proposed to evaluate the performance of the ranks generated by different MCDM methods in a comparison process in order to calculate the degree of the similarity.
The new method is introduced comprehensively and then applied to solve a material selection problem in the following sections. The method’s outputs are also compared with other MCDM methods’ results by the Zakeri–Konstantas performance correlation coefficient and the Hamming distance to determine the similarities of the obtained results. Hence, the remainder of the paper is organized as follows. In the second section, the IVEP method is introduced. The third section is devoted to applying the IVEP method to a real-world case. In the fourth section, the obtained results are comprehensively compared with other MCDM methods’ outputs. Conclusions and suggestions for future research are the final sections of the paper.

2. The IVEP Method

The IVEP multicriteria decision-making method was developed to solve complex decision-making problems constructed on many criteria and alternatives/options. MCDM problems are solved through the analysis of decision matrices, where each matrix is designed to evaluate the decision's options/alternatives against a series of criteria that are, in fact, the common characteristics/attributes that describe the options/alternatives. These characteristics also express the conditions each alternative/option ought to satisfy to be a good choice for achieving the decision's goal. Each decision matrix contains $m + n + 1$ data sets, where $m$ denotes the number of data sets that belong to the alternatives' scores against the criteria, $n$ stands for the number of data sets belonging to the criteria scores against the alternatives, and 1 stands for the data set that includes the importance weights of the criteria. Each data set provides information about the problem's alternatives/options and criteria. As mentioned, Shannon's entropy is a reliable method for analyzing the data sets that contain the scores of criteria to determine their importance [44]. Other information is provided by the different data sets embedded in the decision matrix. The IVEP method analyzes the information provided for the decision's options/alternatives to determine their priorities. The core process is similar to determining the importance weights of criteria, albeit with a different algorithm. The IVEP method is designed based on the information value and the equilibrium points. The equilibrium points are the abstract points at which the relatively balanced scores of the problem's alternatives are located. The value of information is computed based on these points' values using Shannon's entropy. The IVEP algorithm proceeds in the following steps, in which the rates of the alternatives against the criteria are the algorithm's inputs, and the ranks of the alternatives are the outputs.
Step 1. Establishing the decision matrix, in which $s_{ij}$ denotes the score of the $i$th alternative against the $j$th criterion, and $X_{ij}$ stands for the decision matrix (see Equation (13)).
$X_{ij} = \left[ s_{ij} \right], \quad i = \{1, \dots, m\}, \ j = \{1, \dots, n\};$ (13)
Step 2. Normalizing the decision matrix and transforming it into a beneficial decision matrix, in which all values are beneficial. The following equation shows the normalized decision matrix, where $\overline{X}_{ij}$ stands for the normalized decision matrix and $r_{ij}$ is the normalized score of the $i$th alternative against the $j$th criterion (see Equation (14)).
$\overline{X}_{ij} = \left[ r_{ij} \right];$ (14)
-
For beneficial criteria, where higher values are favorable, the normalization process runs by Equation (15), where $s_{ij}^{+}$ stands for the score of the $i$th alternative against the $j$th beneficial criterion.
$r_{ij} = \frac{s_{ij}^{+}}{\max_{1 \leq i \leq m} s_{ij}^{+}};$ (15)
-
For unbeneficial criteria, where, in contrast to beneficial criteria, lower values are expected, the normalization process is in accordance with Equation (16), where $s_{ij}^{-}$ denotes the score of the $i$th alternative against the $j$th unbeneficial criterion.
$r_{ij} = \frac{\min_{1 \leq i \leq m} s_{ij}^{-}}{s_{ij}^{-}};$ (16)
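The two normalization rules can be sketched in Python as follows. This is a minimal illustration, assuming the max and min in Equations (15) and (16) are taken over the alternatives in each criterion's column; the function name is ours.

```python
def normalize(scores, beneficial):
    """Column-wise normalization of a decision matrix (rows = alternatives).

    beneficial[j] is True for benefit criteria (Equation (15)) and False
    for cost criteria (Equation (16))."""
    m, n = len(scores), len(scores[0])
    normalized = [[0.0] * n for _ in range(m)]
    for j in range(n):
        column = [scores[i][j] for i in range(m)]
        for i in range(m):
            if beneficial[j]:
                normalized[i][j] = scores[i][j] / max(column)  # higher is better
            else:
                normalized[i][j] = min(column) / scores[i][j]  # lower is better
    return normalized
```

After this step, the best value in every column equals 1 regardless of the criterion's direction.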
Step 3. Determining the equilibrium points (EPs), around which the values of the whole decision matrix are arranged. Each equilibrium point is located between the maximum and minimum values of the alternatives' scores against the $j$th criterion. The following equations demonstrate the process of determining the EPs (Equations (17) and (18)).
$\varpi_j = \frac{1 - \min_{1 \leq i \leq m} r_{ij}}{2} + \min_{1 \leq i \leq m} r_{ij}, \quad j = \{1, \dots, n\};$ (17)
or
$\varpi_j = 1 - \frac{1 - \min_{1 \leq i \leq m} r_{ij}}{2}, \quad j = \{1, \dots, n\};$ (18)
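Equation (17) (and equivalently Equation (18)) reduces to the midpoint between the column minimum and 1. A minimal sketch, assuming the minimum is taken over the alternatives for each criterion; the function name is ours.

```python
def equilibrium_points(normalized):
    """Equilibrium point per criterion: halfway between 1 and the column minimum."""
    m, n = len(normalized), len(normalized[0])
    eps = []
    for j in range(n):
        col_min = min(normalized[i][j] for i in range(m))
        eps.append((1 - col_min) / 2 + col_min)  # equivalently 1 - (1 - col_min) / 2
    return eps
```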
Step 4. The fourth step is the computation of the separation measures from the equilibrium points by calculating the distance between each alternative's score against the $j$th criterion and the $j$th equilibrium point. The distance computation runs by Equation (19), where $x = 2$ yields the classic Euclidean metric and $\theta_{ij}$ stands for the separation measure.
$\theta_{ij} = \left( \left( \varpi_j - r_{ij} \right)^{x} \right)^{\frac{1}{x}}, \quad x = 2, \ j = \{1, \dots, n\};$ (19)
Step 5. In this step, the impact of the weights of the criteria is incorporated in a new decision matrix. The weights are applied to the separation measures, yielding the weighted separation measure matrix containing the proportional abundance ($\theta_{ij}^{*}$). The new matrix's values are calculated by Equation (20), where $w_j$ denotes the weights of the criteria.
$\theta_{ij}^{*} = w_j \theta_{ij};$ (20)
Step 6. The sixth step of the IVEP algorithm calculates the entropy of the data embedded in the weighted separation measure matrix. The entropy is calculated by Equation (21), where $E_i$ denotes the entropy of the information generated by the $i$th alternative. For a better understanding of the process, note that to calculate the entropy, the alternatives and criteria are transposed, as displayed in Figure 1.
$E_i = -\frac{1}{\log_2 n} \sum_{j=1}^{n} \theta_{ij}^{*} \log_2 \theta_{ij}^{*}, \quad i = \{1, \dots, m\}, \ j = \{1, \dots, n\};$ (21)
Step 7. The computation of the information value (IV) each alternative has generated follows the normalized entropy (see [20,45]), where $\phi_i$ stands for the IV of the $i$th alternative. In the process of measuring the weights of criteria in an MCDM problem, as indicated by [46], the greater the entropy corresponding to a particular criterion, the lesser that criterion's weight and discriminative power. As shown in Equation (22), in contrast to the computation of the weights of criteria by Shannon's entropy, the highest entropy means the highest IV, which results in a higher rank. The following equation also keeps the IV between 0 and 1.
$\phi_i = \frac{E_i - \min_{1 \leq i \leq m} E_i}{\max_{1 \leq i \leq m} E_i - \min_{1 \leq i \leq m} E_i};$ (22)
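The steps above can be combined into one short sketch. This is our reading of Equations (17) and (19)–(22), not a reference implementation: the entropy of Step 6 is summed over the $n$ criteria of each alternative, matching the $1/\log_2 n$ factor, Equation (22) is read as a min–max normalization, and the function name is ours.

```python
import math

def ivep_information_values(normalized, weights):
    """Sketch of IVEP Steps 3-7 on an already-normalized decision matrix."""
    m, n = len(normalized), len(normalized[0])
    # Step 3: equilibrium points; (1 - min)/2 + min simplifies to (1 + min)/2.
    eps = [(1 + min(normalized[i][j] for i in range(m))) / 2 for j in range(n)]
    # Steps 4-5: weighted separation measures (Equations (19) and (20));
    # with x = 2 the separation reduces to an absolute difference.
    theta = [[weights[j] * abs(eps[j] - normalized[i][j]) for j in range(n)]
             for i in range(m)]
    # Step 6: entropy of each alternative's data set (Equation (21));
    # zero terms are skipped, following the 0 * log(0) = 0 convention.
    entropy = [-sum(t * math.log2(t) for t in theta[i] if t > 0) / math.log2(n)
               for i in range(m)]
    # Step 7: min-max normalized information values (Equation (22)).
    lo, hi = min(entropy), max(entropy)
    return [(e - lo) / (hi - lo) if hi > lo else 1.0 for e in entropy]
```

Alternatives are then ranked in decreasing order of their information values $\phi_i$.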

3. Real-World Application and Results

In this section, the IVEP method is applied to a complex decision-making problem. Engineering design concentrates on designing high-performance products with the lowest cost and environmental sensitivity, and is often constrained by the available materials and by the approaches for selecting a suitable material that meets the requirements. Due to the variety of goals and their different properties, a material selection problem often deals with many alternatives and a large number of criteria that characterize the goals and the needs. Therefore, the material selection problem can be converted into an MCDM problem.
In this section, the new method is applied to a complex material selection problem provided in Rathod and Kanzaria [47], which deals with a large number of options and criteria. The data sets, including the criteria weights and alternatives scores against the criteria, are adopted from Rathod and Kanzaria’s work.

The Material Selection Problem

This decision-making problem concentrates on selecting the most suitable phase-change material for solar energy storage. The problem is constructed on nine materials: calcium chloride hexahydrate, stearic acid, p116, RT 60, paraffin wax RT 30, n-docosane, n-octadecane, n-nonadecane, and n-eicosane. The criteria defined for the evaluation include latent heat (J/kg), density (kg/m³), specific heat (kJ/kg·K), thermal conductivity (W/m·K), and cost. Except for cost, which is an unbeneficial criterion, the rest are beneficial criteria where higher values are expected. The corresponding decision matrix of the problem is displayed in Table 1, where $w_j$ stands for the weights of the criteria. The results of the IVEP method's application are illustrated in Table 2 and Table 3, in which the EP ($\varpi_j$) is computed in accordance with Equations (17) and (18), and the EP compared to the alternatives' data sets is displayed in Figure 2, where calcium chloride hexahydrate, stearic acid, p116, RT 60, paraffin wax RT 30, n-docosane, n-octadecane, n-nonadecane, and n-eicosane are shown as $A_1$ through $A_9$. The distance between the EP and each alternative's scores is also illustrated in Figure 3. In the process, the computation of the entropy of each alternative's data set ($E_i$) follows Equation (21), and the information value is computed based on Equation (22).

4. Discussion

This section compares the results obtained from the IVEP method's application with those of other MCDM methods in order to validate the results. In theory, the dominance of one MCDM method over another cannot be concluded; hence, when multiple MCDM methods come to a relative agreement on the ranks of the alternatives, the common rank could potentially be reliable. The same holds for determining the best option. We compared the IVEP method's application with the TOPSIS (see [48]), WPM (see [49]), COPRAS (see [50]), and ARAS (see [51]) methods. The differences between the outputs generated by the aforementioned MCDM methods are pictured in Figure 4 and Table 4.
As shown in Figure 4, there is no definite consensus among the rankings generated by the MCDM methods. Hence, by measuring the similarity between the rankings, the MCDM method that has more similarities to the other MCDM methods could be determined as a reliable MCDM method for evaluating the decision's options and providing the optimum solution. This paper proposes a new statistical measure, called the Zakeri–Konstantas performance correlation coefficient, to measure the similarity between the generated rankings. There exist other tools to analyze MCDM method results, including sensitivity analysis [52,53], which has drawbacks in the evaluation of MCDM method results (see [54]), Spearman's rank correlation (see [55]), Pearson's product-moment correlation coefficient, and Kendall's coefficient of concordance (see [56]).

4.1. Zakeri–Konstantas Performance Correlation Coefficient

Validation of MCDM methods through comparing their results with other MCDM methods, or evaluating their results against popular MCDM methods such as TOPSIS and AHP (see [57]), is the traditional way to establish that a method's results are relatively reliable in theory. In this section, a new statistical/similarity measure, called the Zakeri–Konstantas performance correlation coefficient, is presented to show how the similarities of the data sets generated by MCDM algorithms are computed. In data science, a similarity measure is a way of measuring the similarity or relation between two populations, or of evaluating a population's change over time. Most measures keep similarities between 0 and 1: a value tending toward 1 indicates a higher similarity between two data sets or populations, while a value tending toward 0 means otherwise. Along with Spearman's rank correlation, Pearson's product-moment correlation coefficient, and Kendall's coefficient of concordance, there exist other popular measures for measuring similarities between data, such as the Hamming distance (see [58,59]), Manhattan distance (see [60,61]), Canberra distance (see [62,63,64]), Chebyshev distance (see [65,66]), Minkowski distance (see [67,68,69]), cosine distance (see [70]), Mahalanobis distance (see [71,72]), chi-squared distance (see [73,74]), Jensen–Shannon distance (see [75,76]), Levenshtein distance (see [77,78]), Jaccard/Tanimoto distance (see [79,80]), and Sørensen–Dice distance (see [81]). For a comprehensive perspective on the different similarity and distance measurement methods, see Choi et al. [82].
The Zakeri–Konstantas performance correlation coefficient computes the similarity of two rankings. However, it does not merely compare the similarities of the ranks or measure the distance between the data sets with different approaches. The core concept of the new coefficient is calculating the performance of each decision option based on its corresponding rank generated by the MCDM algorithm. Let us suppose that $F = \{l, h\}$ is a set of two MCDM methods; then their similarity by the Zakeri–Konstantas performance correlation coefficient is computed as follows (see Equations (23)–(25)), where $ZK$ stands for the Zakeri–Konstantas performance correlation coefficient, $\mathbb{N}$ denotes the natural numbers, and $R_i^l$ and $R_i^h$ denote the rank of the $i$th alternative generated by the $l$th and the $h$th MCDM algorithm, respectively.
$ZK(l:h) = \frac{1}{m} \sum_{i=1}^{m} \frac{\min \left( \frac{(m+1) - R_i^l}{\sum_{i=1}^{m} i} \times \frac{m}{R_i^l}, \ \frac{(m+1) - R_i^h}{\sum_{i=1}^{m} i} \times \frac{m}{R_i^h} \right)}{\max \left( \frac{(m+1) - R_i^l}{\sum_{i=1}^{m} i} \times \frac{m}{R_i^l}, \ \frac{(m+1) - R_i^h}{\sum_{i=1}^{m} i} \times \frac{m}{R_i^h} \right)} \times 100, \quad F = \{l, h\};$ (23)
then
$ZK(l:h) = \frac{100}{m} \sum_{i=1}^{m} \frac{\min \left( \frac{m^2 + m - m R_i^l}{R_i^l \sum_{i=1}^{m} i}, \ \frac{m^2 + m - m R_i^h}{R_i^h \sum_{i=1}^{m} i} \right)}{\max \left( \frac{m^2 + m - m R_i^l}{R_i^l \sum_{i=1}^{m} i}, \ \frac{m^2 + m - m R_i^h}{R_i^h \sum_{i=1}^{m} i} \right)};$ (24)
where
$ZK(l:h) = \begin{cases} 0, & \left( \sum_{i=1}^{m} \left( R_i^l - R_i^h \right)^2 \right)^{0.5} = \frac{m^2}{2}, & m = 2\kappa, \ \kappa \in \mathbb{N}, \ \kappa \neq 0, 1 \\ 0, & \left( \sum_{i=1}^{m} \left( R_i^l - R_i^h \right)^2 \right)^{0.5} = \frac{m^2 - 1}{2}, & m = 2\kappa + 1, \ \kappa \in \mathbb{N}, \ \kappa \neq 0 \end{cases};$ (25)
The Zakeri–Konstantas performance correlation coefficient is designed on two main bases: 1. the significance of each option (Equation (26)); and 2. the performance of each option in each rank (Equation (27)).
$U_i^F = \left( (m+1) - R_i^F \right) \left( \sum_{i=1}^{m} i \right)^{-1}, \quad i = \{1, \dots, m\};$ (26)
$\Gamma_i^F = \frac{m}{R_i^F} \left( (m+1) - R_i^F \right) \left( \sum_{i=1}^{m} i \right)^{-1}, \quad i = \{1, \dots, m\};$ (27)
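Our reading of Equations (23) and (24) can be sketched as follows. This is an illustrative reconstruction rather than the authors' reference implementation: each option's performance term combines the significance of Equation (26) with the factor $m / R_i^F$, and the coefficient averages the min/max ratios of these terms over the alternatives; the function name is ours.

```python
def zk_similarity(ranks_l, ranks_h):
    """Illustrative sketch of the Zakeri-Konstantas coefficient (percentage)."""
    m = len(ranks_l)
    total = m * (m + 1) // 2  # sum_{i=1}^{m} i in Equations (23) and (26)

    def perf(rank):
        # Significance (m + 1 - rank) / total, scaled by m / rank.
        return (m + 1 - rank) / total * m / rank

    ratios = [min(perf(rl), perf(rh)) / max(perf(rl), perf(rh))
              for rl, rh in zip(ranks_l, ranks_h)]
    return 100 * sum(ratios) / m
```

Under this reading, identical rankings give 100%, and the score decreases as the rankings diverge.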
There are constant values for the significance and performance of the options in the Zakeri–Konstantas performance correlation coefficient. The values are shown in Table 5 and Table 6.
The similarity of the performance between the IVEP method algorithm’s outputs and other MCDM algorithm outputs computed by the Zakeri–Konstantas performance correlation coefficient is illustrated in Figure 5, where TOPSIS showed the most similar performance to the IVEP method compared to other methods. The performance similarities between methods and the total performance similarities are shown in Table 7, and their analytical comparisons are illustrated in Figure 6. The total performance is calculated in accordance with Equation (28).
$ZK(l:h)_t = ZK(l:h) \left( m - 1 \right)^{-1}, \quad i = \{1, \dots, m\};$ (28)
The Zakeri–Konstantas performance correlation coefficient has an asymmetrical distribution. The new coefficient's domain is $[0, 1]$, in which the lower bound corresponds to the situation where the ranks are utterly opposite (Equation (25)), and the upper bound corresponds to the situation where the two MCDM methods generated the same ranking. However, for a better interpretation, the outputs have been converted to percentages to show the range of similarities from 0 to 100%. As shown in Table 6, although the significance of the options/alternatives and the constant values of the performance increase as the size of the rankings increases, the size does not affect the coefficient's outputs.

4.2. Comparing Outputs with the Hamming Distance

Without considering each MCDM method's performance in generating ranks, this section employs the Hamming distance to measure the similarities between the data sets that each MCDM algorithm generated. In data science, the Hamming distance is often employed to quantify the difference between two bit strings of the same dimensions [83]. Let us assume that $Q_i$ and $V_i$ are two data sets generated by two different MCDM algorithms. To quantify the difference by the Hamming distance, Equation (29) is employed, where $H(Q, V)$ denotes the Hamming distance between $Q_i$ and $V_i$. In order to determine the total similarity of the ranks, we developed Equation (30). The results of the Hamming distance application and the total similarity are exhibited in Table 8. The analytical comparisons between the Hamming distance and the Zakeri–Konstantas performance correlation coefficient are portrayed in Figure 7, where TOPSIS and IVEP have a similarity of 90% as computed by the Hamming distance. In comparison, their performance similarity computed by the Zakeri–Konstantas performance correlation coefficient is 70%, which shows the difference between calculating the similarity with linear equations and calculating it based on the algorithms' performances. The analysis of the total similarities is displayed in Figure 8.
$H(Q_i, V_i) = \sum_{i=1}^{m} \left| \Delta \left( R_i^Q, R_i^V \right) \right|, \quad i = \{1, \dots, m\}, \ Q_i = \{R_1^Q, \dots, R_m^Q\}, \ V_i = \{R_1^V, \dots, R_m^V\};$ (29)
$H_t^F = \frac{100}{m^2} \sum_{i=1}^{m} \left( m - \left| \Delta \left( R_i^Q, R_i^V \right) \right| \right), \quad i = \{1, \dots, m\}, \ Q, V \in F;$ (30)
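A sketch of Equations (29) and (30), reading $|\Delta(R_i^Q, R_i^V)|$ as the absolute rank difference at each position; this is an assumption on our part, since the classic Hamming distance instead counts differing positions. The function names are ours.

```python
def hamming_distance(ranks_q, ranks_v):
    """Equation (29): summed absolute rank differences between two rankings."""
    return sum(abs(rq - rv) for rq, rv in zip(ranks_q, ranks_v))

def total_similarity(ranks_q, ranks_v):
    """Equation (30): total similarity as a percentage (100% for identical ranks)."""
    m = len(ranks_q)
    return 100 / m ** 2 * sum(m - abs(rq - rv)
                              for rq, rv in zip(ranks_q, ranks_v))
```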

5. Conclusions and Future Research

As Robinson [84] stated, entropy is best interpreted as a measure of uncertainty. In the decision-making analysis process, Shannon’s entropy is utilized as one of the most popular MCDM weighting methods to measure the entropy of the data contained in a decision matrix to measure the criteria weights. In contrast, the information values connected to the equilibrium points (IVEP) method use Shannon’s entropy to measure the uncertainty of information each alternative generates through a decision-making process in order to analyze and prioritize them. In addition to the IVEP method, this paper introduced a new statistical measure to measure the similarity of two ranking data sets, called the Zakeri–Konstantas performance correlation coefficient, which focuses on the performance of the data set generators in an analytical comparison.
This paper showcased the applicability of the new MCDM method. The results add another advantage to the applicability of the IVEP method: its results, based on the Hamming distance, were 90% similar to those of the TOPSIS method. Furthermore, according to the Zakeri–Konstantas performance correlation coefficient, the IVEP method's performance has a 41% degree of similarity to the other MCDM methods overall, and is closest to TOPSIS, within 11%. It is worth mentioning that the highest degree of similarity belongs to the WPM method, with 69.608%. As mentioned in the paper, in theory, measuring the dominance of one MCDM method over other MCDM methods is not possible, and it ought to be addressed after the implementation of the results; yet measuring similarities shows the reliability of a method to some extent. TOPSIS is globally considered a reliable MCDM method, and both methods have been applied to dozens of cases to solve decision-making problems. Using linear functions, the Hamming distance showed that the IVEP method has a profound similarity to TOPSIS in generating outputs. Based on the similarity degree presented by the Zakeri–Konstantas performance correlation coefficient, the IVEP method's performance in generating ranks, compared with the other MCDM methods, is closest to TOPSIS, as mentioned earlier. Moreover, the IVEP method's basis is measuring the uncertainty generated by the alternatives through the decision-making process, with the decision matrix as the source of information. This makes it ideal for solving complex problems that deal with a large number of criteria with different contexts, beneficial and unbeneficial, and large numbers of alternatives.
The first suggestion for future research follows from this discussion of the IVEP method's reliability: apply the IVEP method to different MCDM problems and evaluate the results using the various similarity measures mentioned in this paper. The second suggestion is to compare those measures, specifically the weighted Spearman coefficient [85] and the WS coefficient of rankings similarity [86], with the Zakeri–Konstantas performance correlation coefficient, both structurally and in terms of results. Studying the properties of the Zakeri–Konstantas performance correlation coefficient and testing it on rankings of different lengths is another suggestion for future research. This paper also named various types of entropy, such as Deng entropy, the Rényi entropies, smooth Rényi entropy, Havrda–Charvat entropy, and Kapur's entropy family; an interesting next step is to utilize these entropies to measure the information values in the IVEP method and compare the results. Another suggestion for future work is to use the IVEP method to solve complex decision-making problems under uncertain environments. Finally, since we are developing a new extension of the IVEP method for solving decision-making problems with multiple layers of criteria, the last suggestion for future research is to extend the method to these types of problems and compare the results with AHP extensions.

Author Contributions

Conceptualization, S.Z.; methodology, S.Z.; validation, S.Z.; formal analysis, S.Z. and D.K.; investigation, S.Z. and D.K.; resources, D.K.; data curation, S.Z. and D.K.; writing—original draft preparation, S.Z.; writing—review and editing, S.Z. and D.K.; visualization, S.Z. and D.K.; supervision, D.K.; project administration, S.Z. and D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  2. Liu, S.; Forrest, J.; Vallee, R. Emergence and development of grey systems theory. Kybernetes 2009, 38, 1246–1256. [Google Scholar] [CrossRef]
  3. Pawlak, Z. Rough sets. Int. J. Parallel Program. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  4. Lesne, A. Shannon entropy: A rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Math. Struct. Comput. Sci. 2014, 24, e240311. [Google Scholar] [CrossRef] [Green Version]
  5. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  6. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Urbana, IL, USA, 1949. [Google Scholar]
  7. Sherwin, W.B.; i Fornells, N.P. The Introduction of Entropy and Information Methods to Ecology by Ramon Margalef. Entropy 2019, 21, 794. [Google Scholar] [CrossRef] [Green Version]
  8. Shternshis, A.; Mazzarisi, P.; Marmi, S. Measuring market efficiency: The Shannon entropy of high-frequency financial time series. Chaos Solitons Fractals 2022, 162, 112403. [Google Scholar] [CrossRef]
  9. Contreras-Reyes, J.E. Rényi entropy and divergence for VARFIMA processes based on characteristic and impulse response functions. Chaos Solitons Fractals 2022, 160, 112268. [Google Scholar] [CrossRef]
  10. Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553. [Google Scholar] [CrossRef]
  11. Müller-Lennert, M.; Dupuis, F.; Szehr, O.; Fehr, S.; Tomamichel, M. On quantum Rényi entropies: A new generalization and some properties. J. Math. Phys. 2013, 54, 122203. [Google Scholar] [CrossRef]
  12. Renner, R.; Wolf, S. Smooth renyi entropy and applications. In Proceedings of the International Symposium on Information Theory, 2004. ISIT 2004. Proceedings, Chicago, IL, USA, 27 June–02 July 2004; IEEE: Piscataway, NJ, USA, 2004; p. 233. [Google Scholar] [CrossRef]
  13. Majerník, V. The Shannon, Rényi and Havrda-Charvat entropy functionals for the infinite well and quantum oscillator. 2013. Available online: https://www.researchgate.net/profile/Vladimir-Majernik/publication/265000257_The_Shannon_Renyi_and_Havrda-Charvat_entropy_functionals_for_the_infinite_well_and_quantum_oscillator/links/53fb508f0cf27c365cf09b52/The-Shannon-Renyi-and-Havrda-Charvat-entropy-functionals-for-the-infinite-well-and-quantum-oscillator.pdf (accessed on 6 September 2022).
  14. Kumar, S.; Ram, G. A Generalization of the Havrda-Charvat and Tsallis Entropy and Its Axiomatic Characterization. Abstr. Appl. Anal. 2014, 2014, 1–8. [Google Scholar] [CrossRef] [Green Version]
  15. Tustison, N.J.; Awate, S.P.; Song, G.; Cook, T.S.; Gee, J.C. Point Set Registration Using Havrda–Charvat–Tsallis Entropy Measures. IEEE Trans. Med. Imaging 2010, 30, 451–460. [Google Scholar] [CrossRef] [PubMed]
  16. Kapur, J.N. Generalized entropy of order α and type β. Math. Semin. 1967, 4, 78–82. [Google Scholar]
  17. Kapur, J.N. Twenty-five years of maximum-entropy principle. J. Math. Phys. Sci. 1983, 17, 103–156. [Google Scholar]
  18. Kapur, J.N. Four families of measures of entropy. Ind. J. Pure Appl. Math. 1986, 17, 429–449. [Google Scholar]
  19. Kapur, J.N.; Bector, C.R.; Bhatia, B.L. On Entropy and Directed Divergence of Order α and Prob; University of California Press: Oakland, CA, USA, 1961; p. 546. [Google Scholar]
  20. Kumar, U.; Kumar, V.; Kapur, J.N. Normalized Measures of Entropy. Int. J. Gen. Syst. 1986, 12, 55–69. [Google Scholar] [CrossRef]
  21. Xiao, F.; Pedrycz, W. Negation of the Quantum Mass Function for Multisource Quantum Information Fusion with Its Application to Pattern Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 10. [Google Scholar] [CrossRef]
  22. Xiao, F.; Cao, Z.; Lin, C.-T. A Complex Weighted Discounting Multisource Information Fusion With Its Application in Pattern Classification. IEEE Trans. Knowl. Data Eng. 2022, 1–16. [Google Scholar] [CrossRef]
  23. Xiao, F.; Wen, J.; Pedrycz, W. Generalized Divergence-based Decision Making Method with an Application to Pattern Classification. IEEE Trans. Knowl. Data Eng. 2022, 10. [Google Scholar] [CrossRef]
  24. Zakeri, S.; Cheikhrouhou, N.; Konstantas, D.; Barabadi, F.S. A Grey Approach for the Computation of Interactions Between Two Groups of Irrelevant Variables of Decision Matrices. In Multiple Criteria Decision Making; Springer: Singapore, 2022; pp. 193–222. [Google Scholar] [CrossRef]
  25. Zakeri, S.; Yang, Y.; Konstantas, D. A Supplier Selection Model Using Alternative Ranking Process by Alternatives’ Stability Scores and the Grey Equilibrium Product. Processes 2022, 10, 917. [Google Scholar] [CrossRef]
  26. Zakeri, S.; Ecer, F.; Konstantas, D.; Cheikhrouhou, N. The vital-immaterial-mediocre multi-criteria decision-making method. Kybernetes 2021. [Google Scholar] [CrossRef]
  27. Zakeri, S.; Chatterjee, P.; Cheikhrouhou, N.; Konstantas, D. Ranking based on optimal points and win-loss-draw multi-criteria decision-making with application to supplier evaluation problem. Expert Syst. Appl. 2022, 191, 116258. [Google Scholar] [CrossRef]
  28. Diakoulaki, D.; Mavrotas, G.; Papayannakis, L. Determining objective weights in multiple criteria problems: The critic method. Comput. Oper. Res. 1995, 22, 763–770. [Google Scholar] [CrossRef]
  29. Sarfaraz, A.H.; Yazdi, A.K.; Wanke, P.; Nezhad, E.A.; Hosseini, R.S. A novel hierarchical fuzzy inference system for supplier selection and performance improvement in the oil & gas industry. J. Decis. Syst. 2022, 1–28. [Google Scholar] [CrossRef]
  30. Shang, Z.; Yang, X.; Barnes, D.; Wu, C. Supplier selection in sustainable supply chains: Using the integrated BWM, fuzzy Shannon entropy, and fuzzy MULTIMOORA methods. Expert Syst. Appl. 2022, 195, 116567. [Google Scholar] [CrossRef]
  31. Zhang, J.; Li, L.; Zhang, J.; Chen, L.; Chen, G. Private-label sustainable supplier selection using a fuzzy entropy-VIKOR-based approach. Complex Intell. Syst. 2021, 1–18. [Google Scholar] [CrossRef]
  32. Chen, C.-H. A Hybrid Multi-Criteria Decision-Making Approach Based on ANP-Entropy TOPSIS for Building Materials Supplier Selection. Entropy 2021, 23, 1597. [Google Scholar] [CrossRef]
  33. dos Santos, B.M.; Godoy, L.P.; Campos, L.M.S. Performance evaluation of green suppliers using entropy-TOPSIS-F. J. Clean. Prod. 2019, 207, 498–509. [Google Scholar] [CrossRef]
  34. Reddy, A.S.; Kumar, P.R.; Raj, P.A. Entropy-based fuzzy TOPSIS framework for selection of a sustainable building material. Int. J. Constr. Manag. 2022, 22, 1194–1205. [Google Scholar] [CrossRef]
  35. Dwivedi, P.P.; Sharma, D.K. Application of Shannon entropy and CoCoSo methods in selection of the most appropriate engineering sustainability components. Clean. Mater. 2022, 5, 100118. [Google Scholar] [CrossRef]
  36. Hafezalkotob, A.; Hafezalkotob, A. Fuzzy entropy-weighted MULTIMOORA method for materials selection. J. Intell. Fuzzy Syst. 2016, 31, 1211–1226. [Google Scholar] [CrossRef]
  37. Jarrah, R.; Chen, C.-R.; Kassem, M. Ranking structural analysis software applications using AHP and Shannon’s entropy. J. Asian Arch. Build. Eng. 2022, 21, 900–907. [Google Scholar] [CrossRef]
  38. Kumar, V.; Saxena, P.; Garg, H. Selection of optimal software reliability growth models using an integrated entropy–Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) approach. Math. Methods Appl. Sci. 2021, 38, 2501–2520. [Google Scholar] [CrossRef]
  39. Asl, M.B.; Khalilzadeh, A.; Youshanlouei, H.R.; Mood, M.M. Identifying and ranking the effective factors on selecting Enterprise Resource Planning (ERP) system using the combined Delphi and Shannon Entropy approach. Procedia-Soc. Behav. Sci. 2012, 41, 513–520. [Google Scholar] [CrossRef] [Green Version]
  40. El-Araby, A.; Sabry, I.; El-Assal, A. A Comparative Study of Using MCDM Methods Integrated with Entropy Weight Method for Evaluating Facility Location Problem. Oper. Res. Eng. Sci. Theory Appl. 2022, 5, 121–138. [Google Scholar] [CrossRef]
  41. Kohansal, N.; Daneshdoost, F.; Bazyar, A.; Niroomand, S. An integrated MILP-MCDM decision framework for uncertain multi-criteria facilities location problem of glass industries. Int. J. Manag. Decis. Mak. 2020, 19, 207–238. [Google Scholar] [CrossRef]
  42. Nyimbili, P.H.; Erden, T. A Hybrid Approach Integrating Entropy-AHP and GIS for Suitability Assessment of Urban Emergency Facilities. ISPRS Int. J. Geo-Inf. 2020, 9, 419. [Google Scholar] [CrossRef]
  43. Gupta, P.; Mehlawat, M.K.; Grover, N. Intuitionistic fuzzy multi-attribute group decision-making with an application to plant location selection based on a new extended VIKOR method. Inf. Sci. 2016, 370–371, 184–203. [Google Scholar] [CrossRef]
  44. Zakeri, S.; Yang, Y.; Hashemi, M. Grey strategies interaction model. J. Strat. Manag. 2019, 12, 30–60. [Google Scholar] [CrossRef]
  45. Wu, Y.C.; Shih, M.C.; Tu, Y.K. Using normalized entropy to measure uncertainty of rankings for network meta-analyses. Med. Decis. Mak. 2021, 41, 706–713. [Google Scholar] [CrossRef] [PubMed]
  46. Lotfi, F.H.; Fallahnejad, R. Imprecise Shannon’s Entropy and Multi Attribute Decision Making. Entropy 2010, 12, 53–62. [Google Scholar] [CrossRef]
  47. Rathod, M.K.; Kanzaria, H.V. A methodological concept for phase change material selection based on multiple criteria decision analysis with and without fuzzy environment. Mater. Des. 2011, 32, 3578–3585. [Google Scholar] [CrossRef]
  48. Abo-Sinna, M.A.; Amer, A.H. Extensions of TOPSIS for multi-objective large-scale nonlinear programming problems. Appl. Math. Comput. 2005, 162, 243–256. [Google Scholar] [CrossRef]
  49. Chourabi, Z.; Khedher, F.; Babay, A.; Cheikhrouhou, M. Multi-criteria decision making in workforce choice using AHP, WSM and WPM. J. Text. Inst. 2019, 110, 1092–1101. [Google Scholar] [CrossRef]
  50. Hezer, S.; Gelmez, E.; Özceylan, E. Comparative analysis of TOPSIS, VIKOR and COPRAS methods for the COVID-19 Regional Safety Assessment. J. Infect. Public Health 2021, 14, 775–786. [Google Scholar] [CrossRef] [PubMed]
  51. Goswami, S.S.; Behera, D.K. Implementation of COPRAS and ARAS MCDM Approach for the Proper Selection of Green Cutting Fluid. In Current Advances in Mechanical Engineering; Springer: Singapore, 2021; pp. 975–987. [Google Scholar] [CrossRef]
  52. Tanino, T. Sensitivity Analysis in MCDM. In Multicriteria Decision Making; Springer: Boston, MA, USA, 1999; pp. 173–201. [Google Scholar] [CrossRef]
  53. Lee, H.C.; Chang, C.-T. Comparative analysis of MCDM methods for ranking renewable energy sources in Taiwan. Renew. Sustain. Energy Rev. 2018, 92, 883–896. [Google Scholar] [CrossRef]
  54. Saltelli, A.; Aleksankina, K.; Becker, W.; Fennell, P.; Ferretti, F.; Holst, N.; Li, S.; Wu, Q. Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environ. Model. Softw. 2019, 114, 29–39. [Google Scholar] [CrossRef]
  55. Kou, G.; Lu, Y.; Peng, Y.; Shi, Y. Evaluation of Classification Algorithms Using Mcdm and Rank Correlation. Int. J. Inf. Technol. Decis. Mak. 2012, 11, 197–225. [Google Scholar] [CrossRef]
  56. Qaradaghi, M.; Deason, J.P. Analysis of MCDM methods output coherence in oil and gas portfolio prioritization. J. Pet. Explor. Prod. Technol. 2018, 8, 617–640. [Google Scholar] [CrossRef]
  57. Tavana, M.; Soltanifar, M.; Santos-Arteaga, F.J. Analytical hierarchy process: Revolution and evolution. Ann. Oper. Res. 2021, 1–29. [Google Scholar] [CrossRef]
  58. Du, W.S. Subtraction and division operations on intuitionistic fuzzy sets derived from the Hamming distance. Inf. Sci. 2021, 571, 206–224. [Google Scholar] [CrossRef]
  59. Norton, G.; Salagean, A. On the Hamming distance of linear codes over a finite chain ring. IEEE Trans. Inf. Theory 2000, 46, 1060–1067. [Google Scholar] [CrossRef] [Green Version]
  60. Sun, Y.; Li, S.; Wang, Y.; Wang, X. Fault diagnosis of rolling bearing based on empirical mode decomposition and improved manhattan distance in symmetrized dot pattern image. Mech. Syst. Signal Process. 2021, 159, 107817. [Google Scholar] [CrossRef]
  61. Wu, Z.; Song, T.; Zhang, Y. Quantum k-means algorithm based on Manhattan distance. Quantum Inf. Process. 2022, 21, 1–10. [Google Scholar] [CrossRef]
  62. Istalkar, P.; Unnithan, S.L.K.; Biswal, B.; Sivakumar, B. A Canberra distance-based complex network classification framework using lumped catchment characteristics. Stoch. Hydrol. Hydraul. 2021, 35, 1293–1300. [Google Scholar] [CrossRef]
  63. Faisal, M.; Zamzami, E.M. Sutarman Comparative Analysis of Inter-Centroid K-Means Performance using Euclidean Distance, Canberra Distance and Manhattan Distance. J. Phys. Conf. Ser. 2020, 1566, 012112. [Google Scholar] [CrossRef]
  64. Jurman, G.; Riccadonna, S.; Visintainer, R.; Furlanello, C. Canberra distance on ranked lists. In Proceedings of the Advances in Ranking NIPS 09 Workshop; Citeseer: 2009; pp. 22–27. [Google Scholar]
  65. Chen, T.-Y. New Chebyshev distance measures for Pythagorean fuzzy sets with applications to multiple criteria decision analysis using an extended ELECTRE approach. Expert Syst. Appl. 2020, 147, 113164. [Google Scholar] [CrossRef]
  66. Klove, T.; Lin, T.-T.; Tsai, S.-C.; Tzeng, W.-G. Permutation Arrays Under the Chebyshev Distance. IEEE Trans. Inf. Theory 2010, 56, 2611–2617. [Google Scholar] [CrossRef] [Green Version]
  67. Xu, H.; Zeng, W.; Zeng, X.; Yen, G.G. An Evolutionary Algorithm Based on Minkowski Distance for Many-Objective Optimization. IEEE Trans. Cybern. 2018, 49, 3968–3979. [Google Scholar] [CrossRef]
  68. Roche-Newton, O.; Rudnev, M. On the Minkowski distances and products of sum sets. Isr. J. Math. 2015, 209, 507–526. [Google Scholar] [CrossRef] [Green Version]
  69. Merigó, J.M.; Casanovas, M. A New Minkowski Distance Based on Induced Aggregation Operators. Int. J. Comput. Intell. Syst. 2011, 4, 123–133. [Google Scholar] [CrossRef]
  70. Liao, H.; Xu, Z. Approaches to manage hesitant fuzzy linguistic information based on the cosine distance and similarity measures for HFLTSs and their application in qualitative decision making. Expert Syst. Appl. 2015, 42, 5328–5336. [Google Scholar] [CrossRef]
  71. McLachlan, G.J. Mahalanobis distance. Resonance 1999, 4, 20–26. Available online: https://www.ias.ac.in/article/fulltext/reso/004/06/0020-0026 (accessed on 9 September 2022). [CrossRef]
  72. De Maesschalck, R.; Jouan-Rimbaud, D.; Massart, D.L. The Mahalanobis distance. Chemom. Intell. Lab. Syst. 2000, 50, 1–18. [Google Scholar] [CrossRef]
  73. Nielsen, F.; Nock, R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2013, 21, 10–13. [Google Scholar] [CrossRef] [Green Version]
  74. Ye, N.; Borror, C.M.; Parmar, D. Scalable Chi-Square Distance versus Conventional Statistical Distance for Process Monitoring with Uncorrelated Data Variables. Qual. Reliab. Eng. Int. 2003, 19, 505–515. [Google Scholar] [CrossRef]
  75. Nielsen, F. On the Jensen–Shannon Symmetrization of Distances Relying on Abstract Means. Entropy 2019, 21, 485. [Google Scholar] [CrossRef] [Green Version]
  76. Nielsen, F. On a Generalization of the Jensen–Shannon Divergence and the Jensen–Shannon Centroid. Entropy 2020, 22, 221. [Google Scholar] [CrossRef] [Green Version]
  77. Behara, K.; Bhaskar, A.; Chung, E. A novel approach for the structural comparison of origin-destination matrices: Levenshtein distance. Transp. Res. Part C Emerg. Technol. 2020, 111, 513–530. [Google Scholar] [CrossRef]
  78. Berger, B.; Waterman, M.S.; Yu, Y.W. Levenshtein Distance, Sequence Comparison and Biological Database Search. IEEE Trans. Inf. Theory 2020, 67, 3287–3294. [Google Scholar] [CrossRef]
  79. Kosub, S. A note on the triangle inequality for the Jaccard distance. Pattern Recognit. Lett. 2019, 120, 36–38. [Google Scholar] [CrossRef] [Green Version]
  80. Fligner, M.A.; Verducci, J.S.; Blower, P.E. A Modification of the Jaccard–Tanimoto Similarity Index for Diverse Selection of Chemical Compounds Using Binary Strings. Technometrics 2002, 44, 110–119. [Google Scholar] [CrossRef]
  81. Li, X.; Wang, C.; Zhang, X.; Sun, W. Generic SAO Similarity Measure via Extended Sørensen-Dice Index. IEEE Access 2020, 8, 66538–66552. [Google Scholar] [CrossRef]
  82. Choi, S.S.; Cha, S.H.; Tappert, C.C. A survey of binary similarity and distance measures. J. Syst. Cybern. Inform. 2010, 8, 43–48. [Google Scholar]
  83. Bookstein, A.; Kulyukin, V.A.; Raita, T. Generalized Hamming Distance. Inf. Retr. J. 2002, 5, 353–375. [Google Scholar] [CrossRef]
  84. Robinson, D.W. Entropy and Uncertainty. Entropy 2008, 10, 493–506. [Google Scholar] [CrossRef] [Green Version]
  85. Dancelli, L.; Manisera, M.; Vezzoli, M. On Two Classes of Weighted Rank Correlation Measures Deriving from the Spearman’s ρ. In Statistical Models for Data Analysis; Springer: Heidelberg, Germany, 2013; pp. 107–114. [Google Scholar] [CrossRef]
  86. Sałabun, W.; Urbaniak, K. A New Coefficient of Rankings Similarity in Decision-Making Problems. In International Conference on Computational Science; Springer: Cham, Switzerland, 2020; pp. 632–645. [Google Scholar] [CrossRef]
Figure 1. Transposing the weighted separation measure matrix.
Figure 2. The EP compared with the alternatives’ data sets.
Figure 3. The distance between EP and the data set of each material.
Figure 4. The difference between the rankings of materials generated by different MCDM methods.
Figure 5. The similarities of performance between the IVEP method and other MCDM methods computed by the Zakeri–Konstantas performance correlation coefficient.
Figure 6. The analytical comparison of the total performance similarity of each MCDM method.
Figure 7. The comparison between similarities of IVEP's outputs with other algorithms' outputs, generated by the Zakeri–Konstantas performance correlation coefficient (ZK) and the Hamming distance (HD).
Figure 8. The comparison of the similarities of the MCDM algorithms’ outputs generated by the Zakeri–Konstantas performance correlation coefficient (ZK) and the Hamming distance (H).
Table 1. Material selection decision matrix.
| Material | Latent Heat J/kg (LH) | Density kg/m³ (D) | Specific Heat kJ/kg·K | Specific Heat kJ/kg·K | Thermal Conductivity W/m·K (K) | Cost (C) |
|---|---|---|---|---|---|---|
| Benefit (+)/Cost (−) | + | + | + | + | + | − |
| w_j | 0.49010 | 0.16740 | 0.05280 | 0.05280 | 0.21090 | 0.02610 |
| Calcium chloride hexahydrate | 169.98 | 1560.0 | 1.4600 | 2.1300 | 1.0900 | 0.2550 |
| Stearic acid | 186.50 | 903.00 | 2.8300 | 2.3800 | 0.1800 | 0.7450 |
| p116 | 190.00 | 830.00 | 2.1000 | 2.1000 | 0.2100 | 0.3350 |
| RT 60 | 214.40 | 850.00 | 0.9000 | 0.9000 | 0.2000 | 0.2550 |
| Paraffin wax RT 30 | 206.00 | 789.00 | 1.8000 | 2.4000 | 0.1800 | 0.3350 |
| n-Docosane | 194.60 | 785.00 | 1.9300 | 2.3800 | 0.2200 | 0.3350 |
| n-Octadecane | 245.00 | 773.22 | 0.3767 | 2.2670 | 0.1400 | 0.3350 |
| n-Nonadecane | 222.00 | 775.80 | 1.7189 | 1.9210 | 0.1420 | 0.6650 |
| n-Eicosane (A9) | 247.00 | 776.33 | 0.7467 | 2.3770 | 0.1380 | 0.3350 |
Table 2. The equilibrium points (EP) in the IVEP method application process.
| Criteria | Latent Heat J/kg (LH) | Density kg/m³ (D) | Specific Heat kJ/kg·K | Specific Heat kJ/kg·K | Thermal Conductivity W/m·K (K) | Cost (C) |
|---|---|---|---|---|---|---|
| EP (ϖ_j) | 0.844 | 0.748 | 0.567 | 0.688 | 0.563 | 0.671 |
Table 3. The ranks of materials obtained by IVEP using the information value (IV).
| Materials | Calcium Chloride Hexahydrate | Stearic Acid | p116 | RT 60 | Paraffin Wax RT 30 | n-Docosane | n-Octadecane | n-Nonadecane | n-Eicosane |
|---|---|---|---|---|---|---|---|---|---|
| Entropy (E_i) | 0.3826 | 0.3778 | 0.3279 | 0.3289 | 0.2960 | 0.3268 | 0.4129 | 0.3224 | 0.4080 |
| IV (ϕ_i) | 0.9266 | 0.9149 | 0.7941 | 0.7966 | 0.7169 | 0.7914 | 1.0000 | 0.7808 | 0.9883 |
| Rank | 3 | 4 | 6 | 5 | 9 | 7 | 1 | 8 | 2 |
Table 4. The ranks of materials generated by different MCDM methods.
| Material | IVEP | TOPSIS | WPM | COPRAS | ARAS |
|---|---|---|---|---|---|
| Calcium chloride hexahydrate | 3 | 1 | 1 | 1 | 1 |
| Stearic acid | 4 | 4 | 4 | 2 | 3 |
| p116 | 6 | 6 | 3 | 4 | 4 |
| RT 60 | 5 | 7 | 6 | 6 | 7 |
| Paraffin wax RT 30 | 9 | 9 | 5 | 5 | 5 |
| n-Docosane | 7 | 5 | 2 | 3 | 2 |
| n-Octadecane | 1 | 3 | 9 | 9 | 8 |
| n-Nonadecane | 8 | 8 | 8 | 8 | 9 |
| n-Eicosane | 2 | 2 | 7 | 7 | 6 |
Table 5. The constant values of the significance of options/alternatives based on the number of options/alternatives.
| U_i^F | R_1^F | R_2^F | R_3^F | R_4^F | R_5^F | R_6^F | R_7^F | R_8^F | R_9^F | R_10^F |
|---|---|---|---|---|---|---|---|---|---|---|
| m = 10 | 0.2222 | 0.2000 | 0.1778 | 0.1556 | 0.1333 | 0.1111 | 0.0889 | 0.0667 | 0.0444 | 0.0222 |
| m = 9 | 0.2000 | 0.1778 | 0.1556 | 0.1333 | 0.1111 | 0.0889 | 0.0667 | 0.0444 | 0.0222 | |
| m = 8 | 0.1778 | 0.1556 | 0.1333 | 0.1111 | 0.0889 | 0.0667 | 0.0444 | 0.0222 | | |
| m = 7 | 0.1556 | 0.1333 | 0.1111 | 0.0889 | 0.0667 | 0.0444 | 0.0222 | | | |
| m = 6 | 0.1333 | 0.1111 | 0.0889 | 0.0667 | 0.0444 | 0.0222 | | | | |
| m = 5 | 0.1111 | 0.0889 | 0.0667 | 0.0444 | 0.0222 | | | | | |
| m = 4 | 0.0889 | 0.0667 | 0.0444 | 0.0222 | | | | | | |
| m = 3 | 0.0667 | 0.0444 | 0.0222 | | | | | | | |
Table 6. The constant values of the performance of options/alternatives based on the number of options/alternatives.
| Γ_i^F | R_1^F | R_2^F | R_3^F | R_4^F | R_5^F | R_6^F | R_7^F | R_8^F | R_9^F | R_10^F |
|---|---|---|---|---|---|---|---|---|---|---|
| m = 10 | 2.0000 | 0.9000 | 0.5333 | 0.3500 | 0.2400 | 0.1667 | 0.1143 | 0.0750 | 0.0444 | 0.0200 |
| m = 9 | 1.8000 | 0.8000 | 0.4667 | 0.3000 | 0.2000 | 0.1333 | 0.0857 | 0.0500 | 0.0222 | |
| m = 8 | 1.6000 | 0.7000 | 0.4000 | 0.2500 | 0.1600 | 0.1000 | 0.0571 | 0.0250 | | |
| m = 7 | 1.4000 | 0.6000 | 0.3333 | 0.2000 | 0.1200 | 0.0667 | 0.0286 | | | |
| m = 6 | 1.2000 | 0.5000 | 0.2667 | 0.1500 | 0.0800 | 0.0333 | | | | |
| m = 5 | 1.0000 | 0.4000 | 0.2000 | 0.1000 | 0.0400 | | | | | |
| m = 4 | 0.8000 | 0.3000 | 0.1333 | 0.0500 | | | | | | |
| m = 3 | 0.6000 | 0.2000 | 0.0667 | | | | | | | |
Table 7. The analytical comparison of the similarities of the performance of each MCDM method.
| ZK(l:h) | IVEP | TOPSIS | WPM | COPRAS | ARAS | ZK(l:h)_t |
|---|---|---|---|---|---|---|
| IVEP | | 70.84% | 39.44% | 35.11% | 29.25% | 41.015% |
| TOPSIS | 70.84% | | 49.38% | 46.19% | 46.30% | 52.609% |
| WPM | 39.44% | 49.38% | | 84.46% | 71.78% | 69.608% |
| COPRAS | 35.11% | 46.19% | 84.46% | | 70.46% | 68.698% |
| ARAS | 29.25% | 46.30% | 71.78% | 70.46% | | 65.761% |
Table 8. Computing similarities and total similarities of ranks using the Hamming distance.
| | IVEP | TOPSIS | WPM | COPRAS | ARAS | TOTAL |
|---|---|---|---|---|---|---|
| IVEP | | 90.12% | 65.43% | 65.43% | 65.43% | 68.889% |
| TOPSIS | 90.12% | | 72.84% | 72.84% | 75.31% | 75.309% |
| WPM | 65.43% | 72.84% | | 95.06% | 92.59% | 86.667% |
| COPRAS | 65.43% | 72.84% | 95.06% | | 92.59% | 87.160% |
| ARAS | 65.43% | 75.31% | 92.59% | 92.59% | | 86.173% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
