*Article* **A Novel Extension of the Technique for Order Preference by Similarity to Ideal Solution Method with Objective Criteria Weights for Group Decision Making with Interval Numbers**

#### **Dariusz Kacprzak**

Department of Mathematics, Faculty of Computer Science, Bialystok University of Technology, Wiejska 45A, 15-351 Bialystok, Poland; d.kacprzak@pb.edu.pl

**Abstract:** This paper presents an extension of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method with objective criteria weights for Group Decision Making (GDM) with Interval Numbers (INs). The proposed method is an alternative to the popular and widely used methods that aggregate the decision matrices provided by the decision makers (DMs) into a single group matrix, which then serves as the basis for determining objective criteria weights and ranking the alternatives. It does not use an aggregation operator; instead, it transforms the decision matrices into criteria matrices when determining objective criteria weights, and into alternative matrices when ranking the alternatives. This ensures that all of the decision makers' evaluations are taken into account, rather than only some average of them. The numerical example shows the ease of use of the proposed method, which can be implemented in common data analysis software such as Excel.

**Keywords:** interval numbers; MCGDM; TOPSIS; entropy; objective weights

#### **1. Introduction**

Recent years have shown that Multiple Criteria Decision Making (MCDM) methods are increasingly used to solve real decision-making problems concerning various aspects of human life [1–3]. The main application areas of these methods are supply chain management [4], logistics [5], engineering [6], technology [7], and many others. The complexity and diversity of MCDM problems have resulted in the development of a variety of methods to solve them [2]. One group of these methods is based on reference points. Historically, the first method in this group is the Hellwig method [8]. It uses a single reference point, called a "pattern": an artificial solution that maximizes the benefit criteria and minimizes the cost criteria. A synthetic indicator of the "proximity" of the alternatives to the "pattern" allows for their linear ordering and the identification of the best one. However, the most recognized and regularly used method in this group is TOPSIS, developed by Hwang and Yoon [9]. It uses two artificial solutions, called the Positive Ideal Solution (PIS) and the Negative Ideal Solution (NIS). The PIS is equivalent to the "pattern" in Hellwig's method; in turn, the NIS minimizes the benefit criteria and maximizes the cost criteria. Taking into account the separation of the alternatives from the PIS and the NIS, the Relative Closeness Coefficients (RCCs) to the PIS are calculated, which allows for the ranking of the alternatives.

The applications of the TOPSIS method are very diverse. Apart from the main applications of MCDM mentioned above, it is used in more and more new areas, such as flow control in a manufacturing system [10], the selection of sustainable acid rain control options [11], the selection of the best employees using decision support systems in internal control [12], credit risk evaluations for strategic partners [13], the investigation of aggregated social influence [14], the selection of stocks before the formation of a portfolio based on a company's financial performance [15], the identification of the best wind turbines for different locations [16], the ranking of the developmental performance of nations [17], the evaluation of the quality of institutions in the European Union countries [18], the evaluation of technologies improving the quality of life of elderly people [19], and many others.

In real-life problems, it may be difficult to measure data accurately or to present the preferences of the DMs by real numbers; DMs may also express themselves using linguistic variables, in which case another data format is needed. In such situations, MCDM methods, including TOPSIS, should be extended from real numbers to the new type of data. In the literature, we can find a number of extensions of the TOPSIS method for different types of data: fuzzy numbers [20], ordered fuzzy numbers [21], hesitant fuzzy sets [22], intuitionistic fuzzy sets [23], hesitant Pythagorean fuzzy sets [24], interval-valued fuzzy sets [25], interval neutrosophic sets [26], and others. This shows that researchers are developing new ways of presenting data to allow DMs to formulate their preferences more effectively. We can even say that the choice of a data presentation method is itself an MCDM problem.

In this paper we use INs. An extension of the TOPSIS method to MCDM problems with INs was developed by Jahanshahloo et al. [27]. A limitation of this approach lies in the definitions of the PIS and NIS. These reference points are represented by real numbers selected from the lower and upper endpoints of the INs in the decision matrix, rather than by INs themselves, which can lead to incorrect results [28]. In the literature, various methods for determining the PIS and NIS for INs have been proposed. In [29,30], they are represented by real numbers instead of intervals, as in [27]. In [31,32], the PIS is defined as an interval whose endpoints are the maximum values of the lower and upper endpoints of the intervals, respectively, while for the NIS the minimum values of these endpoints are taken. In [33], the PIS is the average of the intervals, while for the NIS, the lower endpoint is the minimum of the lower endpoints of the intervals and the upper endpoint is the maximum of the upper endpoints of the intervals. The main limitation of these methods is that the determined elements of the PIS and NIS may not be elements of the decision matrix. Dymova et al. [28] presented a method of comparing INs, based on the distance between their midpoints, to determine the minimum and maximum elements of the decision matrix. In the proposed approach, we will use an analogous method of comparing INs, proposed by Hu and Wang [34].

An important step in MCDM methods, including the TOPSIS method, is the determination of the criteria weights. These describe the importance of each criterion in the decision-making process and have a key influence on the final result. Usually, subjective or objective weights are used in solving MCDM problems. Subjective weights are determined by the DM or an expert, using their knowledge, experience, skills, etc. In situations where appropriate subjective weights cannot be obtained, or the cost of obtaining them is too high, objective weights can be used. These are determined by mathematical methods based on the decision matrix. One of the popular methods for determining objective weights is the entropy method [9]. It assigns a higher weight to a criterion for which the evaluations of the alternatives are more diversified. Hosseinzadeh Lotfi and Fallahnejad [35] proposed an extension of the entropy method to data in the form of INs. As a result, the objective criteria weights can also be obtained in the form of INs.

Because of the increasing complexity of decision-making problems, they are often analyzed by a group of DMs, which has led to the development of so-called Multiple Criteria Group Decision Making (MCGDM). In such situations, each member of the group defines an individual decision matrix. A common technique is to determine an aggregate (group) matrix from the individual matrices using a selected aggregation operator. This matrix is the basis for determining objective criteria weights and ranking the alternatives. One of the most popular aggregation operators is the arithmetic mean. Note, however, that it may not reflect the preferences or judgments of the DMs [36]. To better explain this limitation, we present two simple numerical examples. We consider a group of two decision makers {*DM*1, *DM*2} who evaluate three alternatives {*A*1, *A*2, *A*3} with respect to two benefit criteria {*C*1, *C*2} using the following scale: {1, 2, 3, 4, 5}. Their evaluations of the alternatives with respect to the criteria are given in the form of individual decision matrices *X*1 and *X*2; by *XAGG* we denote the result of aggregation using the arithmetic mean.

**Example 1.** *The ratings of the alternatives with respect to the criteria provided by the DMs are:*

$$X_1 = \begin{array}{c|cc}
DM_1 & C_1 & C_2 \\ \hline
A_1 & 1 & 1 \\
A_2 & 2 & 2 \\
A_3 & 4 & 3
\end{array}, \qquad
X_2 = \begin{array}{c|cc}
DM_2 & C_1 & C_2 \\ \hline
A_1 & 3 & 3 \\
A_2 & 2 & 2 \\
A_3 & 1 & 1
\end{array}.$$

*Let us note that regardless of whether the ratings of the alternatives with respect to a criterion are in the form "1 and 3", "2 and 2", or "3 and 1", the aggregation results are the same and equal to "2". The aggregation results are:*

$$X_{AGG} = \begin{array}{c|cc}
 & C_1 & C_2 \\ \hline
A_1 & 2 & 2 \\
A_2 & 2 & 2 \\
A_3 & 2.5 & 2
\end{array}.$$

*Based on the matrix XAGG and using the entropy method, we can calculate the criteria weights, obtaining the following vector:*

$$w_{AGG} = (1, 0).$$

*This means that criterion C*2 *has no influence on the ranking of the alternatives and can be omitted. On the other hand, applying the proposed approach to the matrices X*1 *and X*2, *we obtain the following vector of criteria weights:*

$$w = (0.5921, 0.4079)\dots$$

**Example 2.** *The ratings of the alternatives with respect to the criteria provided by the DMs are:*

$$X_1 = \begin{array}{c|cc}
DM_1 & C_1 & C_2 \\ \hline
A_1 & 5 & 3 \\
A_2 & 3 & 2 \\
A_3 & 1 & 1
\end{array}, \qquad
X_2 = \begin{array}{c|cc}
DM_2 & C_1 & C_2 \\ \hline
A_1 & 1 & 1 \\
A_2 & 3 & 2 \\
A_3 & 5 & 3
\end{array}.$$

*The aggregation results are:*

$$X_{AGG} = \begin{array}{c|cc}
 & C_1 & C_2 \\ \hline
A_1 & 3 & 2 \\
A_2 & 3 & 2 \\
A_3 & 3 & 2
\end{array}.$$

*Matrix XAGG shows that all three alternatives* {*A*1, *A*2, *A*3} *are equivalent (i.e., they have the same aggregate rating), and we cannot calculate the vector of criteria weights using the entropy method. However, if we use the proposed approach, we obtain the following vector of criteria weights:*

$$w = (0.6497, 0.3503).$$

From Examples 1 and 2, we can conclude that such an averaged result does not reflect the discrepancies between the individual decisions (the preferences of the DMs), and that using such averaged information may lead to an incorrect final decision.

The aim of this paper is to present a new approach for GDM using the TOPSIS method and objective criteria weights with INs. The first main contribution of this paper is a method for determining the objective criteria weights for GDM without aggregating the individual decision matrices. The method involves transforming the individual decision matrices into criteria matrices and using the interval entropy and interval TOPSIS methods to determine the objective criteria weights. In this method, unlike in the method proposed by Hosseinzadeh Lotfi and Fallahnejad [35], the final weights are obtained in the form of real numbers. The second main contribution of this paper is a TOPSIS method for GDM, also without the aggregation of individual decision matrices. This method involves transforming the decision matrices into matrices of alternatives and then using a new interval TOPSIS method for the ranking of the alternatives.

The remainder of the paper consists of the following sections. Section 2 presents basic information about INs and a description of the classical TOPSIS method and the classical entropy method. The main section of the paper, i.e., Section 3, presents the algorithm of the proposed method in detail. Next, the proposed method is used in a numerical example and compared with other, similar approaches which are based on the aggregation of individual matrices. The paper ends with the conclusions.

#### **2. Preliminaries**

In the following, we present some basic information about INs, the classical TOPSIS method, and the entropy method of determining criteria weights.

#### *2.1. Interval Numbers*

**Definition 1.** *As proposed by [37]: The closed IN, denoted by* $[\underline{a}, \overline{a}]$, *is the set of real numbers given by:*

$$[\underline{a}, \overline{a}] = \{ x \in \mathbb{R} \,:\, \underline{a} \le x \le \overline{a} \}. \tag{1}$$

Throughout this paper, INs will be used in the interval TOPSIS and interval entropy methods, so we assume that they are positive INs, i.e., $\underline{a} > 0$.

**Definition 2.** *As proposed by [37]: Let* $[\underline{a}, \overline{a}]$ *and* $[\underline{b}, \overline{b}]$ *be two positive INs, and let* $\lambda > 0$ *be a real number. Then:*

$$\begin{aligned}
\left[\underline{a},\overline{a}\right] &= \left[\underline{b},\overline{b}\right] \text{ if } \underline{a} = \underline{b} \text{ and } \overline{a} = \overline{b}, \\
\left[\underline{a},\overline{a}\right] + \left[\underline{b},\overline{b}\right] &= \left[\underline{a} + \underline{b},\, \overline{a} + \overline{b}\right], \\
\left[\underline{a},\overline{a}\right] - \left[\underline{b},\overline{b}\right] &= \left[\underline{a} - \overline{b},\, \overline{a} - \underline{b}\right], \\
\left[\underline{a},\overline{a}\right] \cdot \left[\underline{b},\overline{b}\right] &= \left[\underline{a} \cdot \underline{b},\, \overline{a} \cdot \overline{b}\right], \\
\left[\underline{a},\overline{a}\right] / \left[\underline{b},\overline{b}\right] &= \left[\underline{a}/\overline{b},\, \overline{a}/\underline{b}\right], \\
\lambda \cdot \left[\underline{a},\overline{a}\right] &= \left[\lambda \cdot \underline{a},\, \lambda \cdot \overline{a}\right].
\end{aligned}$$
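For illustration, the arithmetic of Definition 2 can be coded directly. The sketch below is a minimal Python implementation for positive INs; the class name `Interval` and its method names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi]; the paper's methods assume lo > 0."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # valid for positive intervals, as assumed throughout the paper
        return Interval(self.lo * other.lo, self.hi * other.hi)

    def __truediv__(self, other):
        return Interval(self.lo / other.hi, self.hi / other.lo)

    def scale(self, lam):
        # scalar multiplication by a real number lambda > 0
        return Interval(lam * self.lo, lam * self.hi)

print(Interval(1, 2) + Interval(3, 5))   # Interval(lo=4, hi=7)
print(Interval(1, 2) / Interval(4, 5))   # Interval(lo=0.2, hi=0.5)
```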

The TOPSIS method requires the determination of the minimum and maximum elements. To compare INs, we apply the method developed by Hu and Wang [34]. It is based on a different description of INs than Equation (1) used in Definition 1.

**Definition 3.** *As proposed by [34]: The IN* $[\underline{a}, \overline{a}]$ *is represented in the form:*

$$\langle m([\underline{a}, \overline{a}]);\, w([\underline{a}, \overline{a}]) \rangle \tag{2}$$

*where* $m([\underline{a}, \overline{a}])$ *and* $w([\underline{a}, \overline{a}])$ *are its mid-point and half-width, respectively, determined as follows:*

$$m(\left[\underline{a}, \overline{a}\right]) = \frac{\underline{a} + \overline{a}}{2} \; , \tag{3}$$

*and:*

$$w(\left[\underline{a}, \overline{a}\right]) = \frac{\overline{a} - \underline{a}}{2} \,. \tag{4}$$

Using the representation from Equation (2), Hu and Wang defined the order relations "$\preceq$" and "$\prec$" for INs as follows.

**Definition 4.** *As proposed by [34]: Let* $[\underline{a}, \overline{a}]$ *and* $[\underline{b}, \overline{b}]$ *be two INs. Then:*

$$[\underline{a}, \overline{a}] \preceq [\underline{b}, \overline{b}]
\;\text{ iff }\;
\begin{cases}
m([\underline{a}, \overline{a}]) < m([\underline{b}, \overline{b}]) & \text{if } m([\underline{a}, \overline{a}]) \neq m([\underline{b}, \overline{b}]) \\
w([\underline{a}, \overline{a}]) \ge w([\underline{b}, \overline{b}]) & \text{if } m([\underline{a}, \overline{a}]) = m([\underline{b}, \overline{b}])
\end{cases} \tag{5}$$

*and:*

$$[\underline{a}, \overline{a}] \prec [\underline{b}, \overline{b}]
\;\text{ iff }\;
[\underline{a}, \overline{a}] \preceq [\underline{b}, \overline{b}]
\;\text{ and }\;
[\underline{a}, \overline{a}] \neq [\underline{b}, \overline{b}]. \tag{6}$$
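A small Python sketch of Definitions 3 and 4 (the function names `midpoint`, `half_width`, `precedes_eq`, and `interval_max` are illustrative) shows how the order relation selects maximal elements, with intervals stored as `(lo, hi)` pairs:

```python
def midpoint(a):          # m([lo, hi]), Eq. (3)
    lo, hi = a
    return (lo + hi) / 2

def half_width(a):        # w([lo, hi]), Eq. (4)
    lo, hi = a
    return (hi - lo) / 2

def precedes_eq(a, b):
    """Weak Hu-Wang order a <= b, Eq. (5)."""
    if midpoint(a) != midpoint(b):
        return midpoint(a) < midpoint(b)
    return half_width(a) >= half_width(b)  # on midpoint ties, narrower is larger

def interval_max(intervals):
    """Maximum element of a list of intervals under Eqs. (5)-(6)."""
    best = intervals[0]
    for x in intervals[1:]:
        if precedes_eq(best, x) and best != x:   # strict order, Eq. (6)
            best = x
    return best

# All three intervals share the midpoint 3; the narrowest one is maximal.
print(interval_max([(2, 4), (1, 5), (3, 3)]))    # (3, 3)
```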

#### *2.2. The Classical TOPSIS Method*

Suppose an MCDM problem is given. The solution of the problem involves the linear ordering of the set of possible alternatives {*A*1, *A*2,..., *Am*} and the indication of the best one. The alternatives under consideration are evaluated with respect to a set of criteria {*C*1, *C*2,..., *Cn*} that determine the choice of a solution. An MCDM problem is represented by a decision matrix *X*, of the form:

$$X = \begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{pmatrix} \tag{7}$$

where $x_{ij}$ for $i = 1, 2, \dots, m$ and $j = 1, 2, \dots, n$ represents the evaluation of the *i*th alternative with respect to the *j*th criterion. In addition, we determine the vector of criteria weights $w = (w_1, w_2, \dots, w_n)$. The classical TOPSIS method developed by Hwang and Yoon consists of the following steps [9]:

**Step 1.** The normalization of the decision matrix *X* and calculation of the matrix *Y*, of the form:

$$Y = \begin{pmatrix}
y_{11} & y_{12} & \cdots & y_{1n} \\
y_{21} & y_{22} & \cdots & y_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
y_{m1} & y_{m2} & \cdots & y_{mn}
\end{pmatrix} \tag{8}$$

using, for *j* = 1, . . . , *n*, the following formula:

$$y_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^{2}}}. \tag{9}$$

**Step 2.** The calculation of the weighted normalized decision matrix *V*, of the form:

$$V = \begin{pmatrix}
v_{11} & v_{12} & \cdots & v_{1n} \\
v_{21} & v_{22} & \cdots & v_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
v_{m1} & v_{m2} & \cdots & v_{mn}
\end{pmatrix} \tag{10}$$

where $v_{ij} = w_j \cdot y_{ij}$ for $i = 1, 2, \dots, m$ and $j = 1, 2, \dots, n$.

**Step 3.** The determination of the PIS ($A^{+}$), of the form:

$$A^{+} = \left(v_1^{+}, v_2^{+}, \dots, v_n^{+}\right) = \left\{ \left(\max_{i} v_{ij} \mid j \in B\right), \left(\min_{i} v_{ij} \mid j \in C\right) \right\}, \tag{11}$$

and of the NIS (*A*−), of the form:

$$A^{-} = \left(v_1^{-}, v_2^{-}, \dots, v_n^{-}\right) = \left\{ \left(\min_{i} v_{ij} \mid j \in B\right), \left(\max_{i} v_{ij} \mid j \in C\right) \right\}, \tag{12}$$

where *B* and *C* are associated with the benefit and cost criteria, respectively.

**Step 4.** The calculation of the distance of each $A_i$ ($i = 1, \dots, m$) from the PIS:

$$d\_i^+ = \sqrt{\sum\_{j=1}^n \left(v\_{ij} - v\_j^+\right)^2} \,\tag{13}$$

and from the NIS:

$$d\_i^- = \sqrt{\sum\_{j=1}^n \left(v\_{ij} - v\_j^-\right)^2}.\tag{14}$$

**Step 5.** The calculation of the coefficients *RCCi* (*i* = 1, 2, . . . , *m*) of relative closeness to the PIS for each alternative *Ai* (*i* = 1, . . . , *m*), using the following formula:

$$\text{RCC}\_{i} = \frac{d\_{i}^{-}}{d\_{i}^{+} + d\_{i}^{-}}.\tag{15}$$

**Step 6.** The ranking of alternatives in descending order, using *RCCi*, and the determination of the best one (the one with the highest value of *RCCi*).
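For reference, the six steps fit in a few lines of NumPy. The sketch below uses illustrative toy data and a `benefit` mask that distinguishes the sets *B* and *C*:

```python
import numpy as np

def topsis(x, w, benefit):
    """Classical TOPSIS, Steps 1-6 of Section 2.2.

    x: m x n decision matrix; w: criteria weights;
    benefit: boolean mask per criterion (True for benefit, False for cost).
    Returns the relative closeness coefficients RCC_i, Eq. (15).
    """
    y = x / np.sqrt((x ** 2).sum(axis=0))                     # Eq. (9)
    v = w * y                                                 # Eq. (10)
    a_pos = np.where(benefit, v.max(axis=0), v.min(axis=0))   # PIS, Eq. (11)
    a_neg = np.where(benefit, v.min(axis=0), v.max(axis=0))   # NIS, Eq. (12)
    d_pos = np.sqrt(((v - a_pos) ** 2).sum(axis=1))           # Eq. (13)
    d_neg = np.sqrt(((v - a_neg) ** 2).sum(axis=1))           # Eq. (14)
    return d_neg / (d_pos + d_neg)                            # Eq. (15)

x = np.array([[7.0, 9.0, 9.0], [8.0, 7.0, 8.0], [9.0, 6.0, 8.0]])
rcc = topsis(x, np.array([0.3, 0.4, 0.3]), np.array([True, True, True]))
print(np.argsort(-rcc) + 1)   # alternatives from best to worst (1-indexed)
```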

#### *2.3. The Entropy Method*

The starting point for determining objective criteria weights by the entropy method is the decision matrix, Equation (7) (see Section 2.2). The method consists of the following steps [9]:

**Step 1.** The normalization of the decision matrix *X* and the calculation of the matrix *Y*, of the form:

$$Y = \begin{pmatrix}
y_{11} & y_{12} & \cdots & y_{1n} \\
y_{21} & y_{22} & \cdots & y_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
y_{m1} & y_{m2} & \cdots & y_{mn}
\end{pmatrix} \tag{16}$$

using the following formula for *j* = 1, . . . , *n*:

$$y\_{ij} = \frac{x\_{ij}}{\sum\_{i=1}^{m} x\_{ij}}.\tag{17}$$

**Step 2.** The calculation of the vector of entropy *e* = (*e*1,*e*2,...,*en*), using the following formula for *j* = 1, . . . , *n*:

$$e_{j} = -\frac{1}{\ln m} \sum_{i=1}^{m} y_{ij} \ln y_{ij}. \tag{18}$$

Moreover, when $y_{ij} = 0$ for some $i$, the value of $y_{ij} \ln y_{ij}$ is taken as 0, which is consistent with $\lim_{x \to 0^{+}} x \ln x = 0$.

**Step 3.** The calculation of the vector of diversification *d* = (*d*1, *d*2,..., *dn*), using the following formula for *j* = 1, . . . , *n*:

$$d_{j} = 1 - e_{j}. \tag{19}$$

**Step 4.** The calculation of the vector of objective criteria weights *w* = (*w*1, *w*2,..., *wn*), where:

$$w\_j = \frac{d\_j}{\sum\_{j=1}^{n} d\_j}.\tag{20}$$
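These four steps, together with the convention for zero entries, can be sketched as follows (NumPy; the function name `entropy_weights` is illustrative):

```python
import numpy as np

def entropy_weights(x):
    """Objective criteria weights by the entropy method, Eqs. (17)-(20).

    x: m x n decision matrix with non-negative entries.
    """
    m = x.shape[0]
    y = x / x.sum(axis=0)                                    # Eq. (17)
    # 0 * ln 0 is taken as 0, consistent with lim_{t->0+} t ln t = 0
    ylogy = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0)), 0.0)
    e = -ylogy.sum(axis=0) / np.log(m)                       # Eq. (18)
    d = 1.0 - e                                              # Eq. (19)
    return d / d.sum()                                       # Eq. (20)

print(entropy_weights(np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])))
# ~[0.74 0.26]: the more diversified first criterion gets the larger weight
```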

#### **3. The Proposed Approach**

The proposed extension of the TOPSIS method with objective criteria weights based on interval data for GDM consists of three major stages:

• Stage 1: the preparation of the data;
• Stage 2: the calculation of the objective criteria weights for GDM, without the aggregation of the individual decision matrices;
• Stage 3: the ranking of the alternatives using the extended TOPSIS method for GDM, without the aggregation of the individual decision matrices.
A flow chart and a graphical scheme of the proposed method are shown in Figures 1 and 2, respectively.

**Stage 1:** The preparation of the data. As in Section 2.2., suppose an MCDM problem for GDM is given, which consists of a set of possible alternatives {*A*1, *A*2,..., *Am*} and a set of criteria {*C*1, *C*2,..., *Cn*}. In this case, the evaluation of alternatives, with respect to the criteria, is performed by a group of DMs or experts {*DM*1, *DM*2,..., *DMK*}. In the process of GDM, each *DMk* (*k* = 1, 2, . . . , *K*) constructs a matrix, called the individual decision matrix, of the form:

$$X^{k} = \begin{array}{c|cccc}
DM_k & C_1 & C_2 & \cdots & C_n \\ \hline
A_1 & x_{11}^{k} & x_{12}^{k} & \cdots & x_{1n}^{k} \\
A_2 & x_{21}^{k} & x_{22}^{k} & \cdots & x_{2n}^{k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_m & x_{m1}^{k} & x_{m2}^{k} & \cdots & x_{mn}^{k}
\end{array}\,. \tag{21}$$

In the proposed approach, each element $x_{ij}^{k}$ for $i = 1, 2, \dots, m$ and $j = 1, 2, \dots, n$ of the matrix $X^{k}$ is an IN, i.e., $x_{ij}^{k} = \left[\underline{x}_{ij}^{k}, \overline{x}_{ij}^{k}\right]$, and represents the evaluation by the *k*th DM of the *i*th alternative with respect to the *j*th criterion.

**Stage 2:** The calculation of the objective criteria weights for GDM, without the aggregation of individual decision matrices. The proposed method of calculation of the objective criteria weights based on interval entropy and interval TOPSIS consists of the following steps.

**Step 1.** The normalization, for each decision maker *DMk* (*k* = 1, 2, . . . , *K*), of their individual decision matrix, as given by Equation (21), and obtaining the matrix *Yk*, of the form:

$$Y^{k} = \begin{array}{c|cccc}
DM_k & C_1 & C_2 & \cdots & C_n \\ \hline
A_1 & y_{11}^{k} & y_{12}^{k} & \cdots & y_{1n}^{k} \\
A_2 & y_{21}^{k} & y_{22}^{k} & \cdots & y_{2n}^{k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_m & y_{m1}^{k} & y_{m2}^{k} & \cdots & y_{mn}^{k}
\end{array} \tag{22}$$

using the following formula for $j = 1, \dots, n$ [35]:

$$y_{ij}^{k} = \begin{cases}
\left[ \dfrac{\underline{x}_{ij}^{k}}{\sum_{i=1}^{m} \overline{x}_{ij}^{k}},\; \dfrac{\overline{x}_{ij}^{k}}{\sum_{i=1}^{m} \underline{x}_{ij}^{k}} \right] & \text{if } j \in B \\[3ex]
\left[ \dfrac{1/\overline{x}_{ij}^{k}}{\sum_{i=1}^{m} 1/\underline{x}_{ij}^{k}},\; \dfrac{1/\underline{x}_{ij}^{k}}{\sum_{i=1}^{m} 1/\overline{x}_{ij}^{k}} \right] & \text{if } j \in C
\end{cases} \tag{23}$$

**Figure 1.** The conceptual framework of the proposed method.

**Figure 2.** Hierarchical structure of the proposed method.

**Step 2.** The construction, for each criterion *Cj* (*j* = 1, 2, . . . , *n*), of the matrix *V<sup>j</sup>* , of the form:

$$V^{j} = \begin{array}{c|cccc}
C_j & DM_1 & DM_2 & \cdots & DM_K \\ \hline
A_1 & y_{1j}^{1} & y_{1j}^{2} & \cdots & y_{1j}^{K} \\
A_2 & y_{2j}^{1} & y_{2j}^{2} & \cdots & y_{2j}^{K} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_m & y_{mj}^{1} & y_{mj}^{2} & \cdots & y_{mj}^{K}
\end{array}\,. \tag{24}$$

**Step 3.** The calculation, for each criterion *Cj* (*j* = 1, 2, . . . , *n*), of the entropy vector *ej*, of the form: 

$$e_{j} = \left(e_{j}^{1}, e_{j}^{2}, \dots, e_{j}^{K}\right) \tag{25}$$

based on the matrix $V^{j}$, where $e_{j}^{k} = \left[\underline{e}_{j}^{k}, \overline{e}_{j}^{k}\right]$ for $k = 1, 2, \dots, K$ and:

$$\underline{e}_{j}^{k} = \min \left\{ -\frac{1}{\ln m} \sum_{i=1}^{m} \underline{y}_{ij}^{k} \ln \underline{y}_{ij}^{k},\; -\frac{1}{\ln m} \sum_{i=1}^{m} \overline{y}_{ij}^{k} \ln \overline{y}_{ij}^{k} \right\}, \tag{26}$$

and:

$$\overline{e}_{j}^{k} = \max \left\{ -\frac{1}{\ln m} \sum_{i=1}^{m} \underline{y}_{ij}^{k} \ln \underline{y}_{ij}^{k},\; -\frac{1}{\ln m} \sum_{i=1}^{m} \overline{y}_{ij}^{k} \ln \overline{y}_{ij}^{k} \right\}, \tag{27}$$

and $\underline{y}_{ij}^{k} \ln \underline{y}_{ij}^{k}$ or $\overline{y}_{ij}^{k} \ln \overline{y}_{ij}^{k}$ is defined to be 0 if $\underline{y}_{ij}^{k} = 0$ or $\overline{y}_{ij}^{k} = 0$ [35], respectively.

**Step 4.** The calculation, for each criterion $C_j$ ($j = 1, 2, \dots, n$), of the diversification vector $d_j$, of the form:

$$d_{j} = \left(d_{j}^{1}, d_{j}^{2}, \dots, d_{j}^{K}\right) \tag{28}$$

where $d_{j}^{k} = 1 - e_{j}^{k} = \left[1 - \overline{e}_{j}^{k},\, 1 - \underline{e}_{j}^{k}\right]$ for $k = 1, 2, \dots, K$, and the construction of the diversification matrix $D$, of the form:

$$D = \begin{array}{c|cccc}
 & DM_1 & DM_2 & \cdots & DM_K \\ \hline
C_1 & d_{1}^{1} & d_{1}^{2} & \cdots & d_{1}^{K} \\
C_2 & d_{2}^{1} & d_{2}^{2} & \cdots & d_{2}^{K} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
C_n & d_{n}^{1} & d_{n}^{2} & \cdots & d_{n}^{K}
\end{array}\,. \tag{29}$$

**Step 5.** The determination of the Most Important Criterion (MIC):

$$C^{+} = \left(c_{1}^{+}, c_{2}^{+}, \dots, c_{K}^{+}\right) \tag{30}$$

where $c_{k}^{+} = \max_{j} d_{j}^{k}$ for $k = 1, 2, \dots, K$ (the maximum being taken with respect to the order relation given by Equations (5) and (6)), and of the Least Important Criterion (LIC):

$$C^{-} = \left(c_{1}^{-}, c_{2}^{-}, \dots, c_{K}^{-}\right) \tag{31}$$

where $c_{k}^{-} = [0, 0]$ for $k = 1, 2, \dots, K$, based on the matrix $D$.

**Step 6.** The calculation of the distance of each diversification vector *dj*, representing the weight of criterion *Cj* (*j* = 1, 2, . . . , *n*), from the MIC:

$$d_{j}^{C+} = \sqrt{\sum_{k=1}^{K} \left[ \left( \underline{d}_{j}^{k} - \underline{c}_{k}^{+} \right)^{2} + \left( \overline{d}_{j}^{k} - \overline{c}_{k}^{+} \right)^{2} \right]}, \tag{32}$$

and from the LIC:

$$d_{j}^{C-} = \sqrt{\sum_{k=1}^{K} \left[ \left( \underline{d}_{j}^{k} - \underline{c}_{k}^{-} \right)^{2} + \left( \overline{d}_{j}^{k} - \overline{c}_{k}^{-} \right)^{2} \right]}. \tag{33}$$

**Step 7.** The calculation of the coefficients $RCC_{j}^{C}$ ($j = 1, 2, \dots, n$) of relative closeness to the MIC for each diversification vector $d_j$, using the following formula:

$$RCC_{j}^{C} = \frac{d_{j}^{C-}}{d_{j}^{C+} + d_{j}^{C-}}. \tag{34}$$

**Step 8.** The calculation of the vector of objective criteria weights:

$$w = (w\_1, w\_2, \dots, w\_n) \tag{35}$$

where:

$$w\_{j} = \frac{\text{RCC}\_{j}^{\text{C}}}{\sum\_{j=1}^{n} \text{RCC}\_{j}^{\text{C}}} \tag{36}$$

for *j* = 1, 2, . . . , *n*.
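The whole of Stage 2 can be sketched compactly in NumPy. In the sketch below (illustrative names; the interval matrices are stored as `K x m x n` endpoint arrays), the interval maximum in Step 5 is taken by midpoint, a simplification of the Hu–Wang order of Section 2.1, which additionally breaks midpoint ties by half-width. For the crisp data of Example 1, the sketch reproduces w = (0.5921, 0.4079):

```python
import numpy as np

def stage2_weights(X_lo, X_hi, benefit):
    """Objective criteria weights for GDM without aggregation (Stage 2).

    X_lo, X_hi: K x m x n arrays with the endpoints of the K individual
    interval decision matrices; benefit: boolean mask of length n.
    """
    K, m, n = X_lo.shape
    # Step 1: interval normalization, Eq. (23), per DM and per criterion
    Y_lo, Y_hi = np.empty_like(X_lo), np.empty_like(X_hi)
    for j in range(n):
        lo, hi = X_lo[:, :, j], X_hi[:, :, j]
        if not benefit[j]:                        # cost criterion: reciprocals
            lo, hi = 1.0 / hi, 1.0 / lo
        Y_lo[:, :, j] = lo / hi.sum(axis=1, keepdims=True)
        Y_hi[:, :, j] = hi / lo.sum(axis=1, keepdims=True)
    # Steps 2-3: interval entropy per criterion and DM, Eqs. (25)-(27)
    def H(y):                                     # 0 * ln 0 is taken as 0
        t = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0)), 0.0)
        return -t.sum(axis=1) / np.log(m)         # K x n
    h1, h2 = H(Y_lo), H(Y_hi)
    e_lo, e_hi = np.minimum(h1, h2), np.maximum(h1, h2)
    # Step 4: interval diversification d = 1 - e = [1 - e_hi, 1 - e_lo]
    d_lo, d_hi = 1.0 - e_hi, 1.0 - e_lo
    # Step 5: MIC = per-DM maximal diversification (by midpoint); LIC = [0, 0]
    jmax = ((d_lo + d_hi) / 2).argmax(axis=1)
    c_lo, c_hi = d_lo[np.arange(K), jmax], d_hi[np.arange(K), jmax]
    # Steps 6-7: distances to MIC and LIC, Eqs. (32)-(33), and RCCs, Eq. (34)
    d_pos = np.sqrt(((d_lo - c_lo[:, None]) ** 2
                     + (d_hi - c_hi[:, None]) ** 2).sum(axis=0))
    d_neg = np.sqrt((d_lo ** 2 + d_hi ** 2).sum(axis=0))
    rcc = d_neg / (d_pos + d_neg)
    return rcc / rcc.sum()                        # Step 8, Eq. (36)

# Crisp data of Example 1 as degenerate intervals: gives ~[0.5921 0.4079]
X1 = np.array([[1.0, 1.0], [2.0, 2.0], [4.0, 3.0]])
X2 = np.array([[3.0, 3.0], [2.0, 2.0], [1.0, 1.0]])
X = np.stack([X1, X2])
print(stage2_weights(X, X, np.array([True, True])))
```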

**Stage 3:** The ranking of the alternatives. The developed extended TOPSIS method for GDM without the aggregation of individual decision matrices consists of the following steps.

**Step 1.** The normalization, for each decision maker *DMk* (*k* = 1, 2, . . . , *K*), of their individual decision matrix, as given by Equation (21), and obtaining the matrix *Yk*, of the form:

$$Y^{k} = \begin{array}{c|cccc}
DM_k & C_1 & C_2 & \cdots & C_n \\ \hline
A_1 & y_{11}^{k} & y_{12}^{k} & \cdots & y_{1n}^{k} \\
A_2 & y_{21}^{k} & y_{22}^{k} & \cdots & y_{2n}^{k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_m & y_{m1}^{k} & y_{m2}^{k} & \cdots & y_{mn}^{k}
\end{array} \tag{37}$$

using the following formula for *j* = 1, . . . , *n* [38]:

$$y_{ij}^{k} = \begin{cases}
\left[ \dfrac{\underline{x}_{ij}^{k}}{\sum_{i=1}^{m} \overline{x}_{ij}^{k}},\; \dfrac{\overline{x}_{ij}^{k}}{\sum_{i=1}^{m} \underline{x}_{ij}^{k}} \right] & \text{if } j \in B \\[3ex]
\left[ \dfrac{1/\overline{x}_{ij}^{k}}{\sum_{i=1}^{m} 1/\underline{x}_{ij}^{k}},\; \dfrac{1/\underline{x}_{ij}^{k}}{\sum_{i=1}^{m} 1/\overline{x}_{ij}^{k}} \right] & \text{if } j \in C
\end{cases} \tag{38}$$

**Remark 1.** *Note that the normalization method, Equation (38), does not guarantee that the normalized elements* $y_{ij}^{k}$ *belong to the interval* [0, 1]. *If this property is required, the elements of the matrix* $Y^{k}$ *can be rescaled using the following formula [38]:*

$$z\_{ij}^k = \left[\frac{\underline{y}\_{ij}^k}{\sqrt{\sum\_{i=1}^m \left[ \left( \underline{y}\_{ij}^k \right)^2 + \left( \overline{y}\_{ij}^k \right)^2 \right]}}, \frac{\overline{y}\_{ij}^k}{\sqrt{\sum\_{i=1}^m \left[ \left( \underline{y}\_{ij}^k \right)^2 + \left( \overline{y}\_{ij}^k \right)^2 \right]}}\right]. \tag{39}$$

*As the final result, we obtain normalized decision matrices Z<sup>k</sup>* (*k* = 1, 2, . . . , *K*)*:*

$$Z^{k} = \begin{array}{c|cccc}
DM_k & C_1 & C_2 & \cdots & C_n \\ \hline
A_1 & z_{11}^{k} & z_{12}^{k} & \cdots & z_{1n}^{k} \\
A_2 & z_{21}^{k} & z_{22}^{k} & \cdots & z_{2n}^{k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_m & z_{m1}^{k} & z_{m2}^{k} & \cdots & z_{mn}^{k}
\end{array}\,. \tag{40}$$

**Step 2.** The calculation of the weighted normalized individual matrices *V<sup>k</sup>* (*k* = 1, 2, . . . , *K*):

$$V^{k} = \begin{array}{c|cccc}
DM_k & C_1 & C_2 & \cdots & C_n \\ \hline
A_1 & v_{11}^{k} & v_{12}^{k} & \cdots & v_{1n}^{k} \\
A_2 & v_{21}^{k} & v_{22}^{k} & \cdots & v_{2n}^{k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_m & v_{m1}^{k} & v_{m2}^{k} & \cdots & v_{mn}^{k}
\end{array} \tag{41}$$

where:

$$v_{ij}^{k} = w_{j} z_{ij}^{k} = \left[ w_{j} \underline{z}_{ij}^{k},\, w_{j} \overline{z}_{ij}^{k} \right] \tag{42}$$

and $w_j$ ($j = 1, 2, \dots, n$) are the objective criteria weights obtained in Stage 2.

**Step 3.** The construction, for each alternative $A_i$ ($i = 1, 2, \dots, m$), of the matrix $A^{i}$:

$$A^{i} = \begin{array}{c|cccc}
A_i & C_1 & C_2 & \cdots & C_n \\ \hline
DM_1 & v_{i1}^{1} & v_{i2}^{1} & \cdots & v_{in}^{1} \\
DM_2 & v_{i1}^{2} & v_{i2}^{2} & \cdots & v_{in}^{2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
DM_K & v_{i1}^{K} & v_{i2}^{K} & \cdots & v_{in}^{K}
\end{array}\,. \tag{43}$$

**Step 4.** The determination of the PIS (*A*+):

$$A^{+} = \begin{array}{c|cccc}
 & C_1 & C_2 & \cdots & C_n \\ \hline
DM_1 & v_{1}^{1+} & v_{2}^{1+} & \cdots & v_{n}^{1+} \\
DM_2 & v_{1}^{2+} & v_{2}^{2+} & \cdots & v_{n}^{2+} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
DM_K & v_{1}^{K+} & v_{2}^{K+} & \cdots & v_{n}^{K+}
\end{array} \tag{44}$$

where $v_{j}^{k+} = \max_{i} v_{ij}^{k}$ for $j = 1, 2, \dots, n$ and $k = 1, 2, \dots, K$, and of the NIS ($A^{-}$):

$$A^{-} = \begin{array}{c|cccc}
 & C_1 & C_2 & \cdots & C_n \\ \hline
DM_1 & v_{1}^{1-} & v_{2}^{1-} & \cdots & v_{n}^{1-} \\
DM_2 & v_{1}^{2-} & v_{2}^{2-} & \cdots & v_{n}^{2-} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
DM_K & v_{1}^{K-} & v_{2}^{K-} & \cdots & v_{n}^{K-}
\end{array} \tag{45}$$

where $v_{j}^{k-} = \min_{i} v_{ij}^{k}$ for $j = 1, 2, \dots, n$ and $k = 1, 2, \dots, K$.

**Step 5.** The calculation of the distance of each matrix $A^{i}$, representing the alternative $A_i$ ($i = 1, \dots, m$), from the PIS:

$$d\_{i}^{A+} = \sqrt{\sum\_{k=1}^{K} \sum\_{j=1}^{n} \left[ \left( \underline{v}\_{ij}^{k} - \underline{v}\_{j}^{k+} \right)^{2} + \left( \overline{v}\_{ij}^{k} - \overline{v}\_{j}^{k+} \right)^{2} \right]},\tag{46}$$

and from the NIS:

$$d\_i^{A-} = \sqrt{\sum\_{k=1}^{K} \sum\_{j=1}^{n} \left[ \left( \underline{\mathbf{v}}\_{ij}^{k} - \underline{\mathbf{v}}\_{j}^{k-} \right)^2 + \left( \overline{\mathbf{v}}\_{ij}^{k} - \overline{\mathbf{v}}\_{j}^{k-} \right)^2 \right]}. \tag{47}$$

**Step 6.** The calculation of the coefficients $RCC_{i}^{A}$ ($i = 1, 2, \dots, m$) of relative closeness to the PIS for each alternative $A_i$ ($i = 1, \dots, m$), using the following formula:

$$RCC_{i}^{A} = \frac{d_{i}^{A-}}{d_{i}^{A-} + d_{i}^{A+}}. \tag{48}$$

**Step 7.** The ranking of the alternatives in descending order of $RCC_{i}^{A}$, and the determination of the best one.
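Stage 3 admits an analogous sketch under the same assumptions (NumPy, `K x m x n` endpoint arrays, PIS and NIS entries selected per DM and criterion by interval midpoint as a stand-in for the Hu–Wang order); Step 3 is implicit in the vectorized distance calculations:

```python
import numpy as np

def stage3_rcc(X_lo, X_hi, w, benefit):
    """RCCs of the extended TOPSIS for GDM without aggregation (Stage 3).

    X_lo, X_hi: K x m x n interval endpoints of the individual matrices;
    w: criteria weights from Stage 2; benefit: boolean mask of length n.
    """
    K, m, n = X_lo.shape
    # Step 1: interval normalization, Eq. (38), then rescaling, Eq. (39)
    Y_lo, Y_hi = np.empty_like(X_lo), np.empty_like(X_hi)
    for j in range(n):
        lo, hi = X_lo[:, :, j], X_hi[:, :, j]
        if not benefit[j]:
            lo, hi = 1.0 / hi, 1.0 / lo
        Y_lo[:, :, j] = lo / hi.sum(axis=1, keepdims=True)
        Y_hi[:, :, j] = hi / lo.sum(axis=1, keepdims=True)
    norm = np.sqrt((Y_lo ** 2 + Y_hi ** 2).sum(axis=1, keepdims=True))
    Z_lo, Z_hi = Y_lo / norm, Y_hi / norm
    # Step 2: weighted normalized matrices, Eq. (42)
    V_lo, V_hi = w * Z_lo, w * Z_hi
    # Steps 3-4: PIS and NIS per DM and criterion, Eqs. (44)-(45)
    mid = (V_lo + V_hi) / 2
    imax, imin = mid.argmax(axis=1), mid.argmin(axis=1)          # each K x n
    k_ix, j_ix = np.arange(K)[:, None], np.arange(n)[None, :]
    p_lo, p_hi = V_lo[k_ix, imax, j_ix], V_hi[k_ix, imax, j_ix]  # PIS
    q_lo, q_hi = V_lo[k_ix, imin, j_ix], V_hi[k_ix, imin, j_ix]  # NIS
    # Steps 5-6: distances, Eqs. (46)-(47), and closeness, Eq. (48)
    d_pos = np.sqrt(((V_lo - p_lo[:, None, :]) ** 2
                     + (V_hi - p_hi[:, None, :]) ** 2).sum(axis=(0, 2)))
    d_neg = np.sqrt(((V_lo - q_lo[:, None, :]) ** 2
                     + (V_hi - q_hi[:, None, :]) ** 2).sum(axis=(0, 2)))
    return d_neg / (d_pos + d_neg)

# Step 7: alternatives sorted from best to worst (1-indexed)
# order = np.argsort(-stage3_rcc(X_lo, X_hi, w, benefit)) + 1
```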

#### **4. A Numerical Example and Results**

The approach proposed in Section 3 will now be illustrated with a numerical example, taken from [38], related to the evaluation of the authorities of a university in China. The set of alternatives {*A*1, *A*2, *A*3} consists of the president and two vice presidents, who are evaluated by teams of teachers (*DM*1), researchers (*DM*2), and undergraduates (*DM*3). The DMs evaluate them with respect to leadership (*C*1), performance (*C*2), and style of work (*C*3), using a point scale from 0 to 100. The team ratings are represented by INs, whose lower and upper endpoints are the minimum and the maximum rating among the members of the given team, respectively. The individual decision matrices are presented in Table 1.


**Table 1.** Individual decision matrices.

The first main step of the proposed approach is to determine the objective criteria weights, as described in Stage 2 of Section 3. The individual decision matrices are normalized (see Table 2) and then transformed into matrices of criteria (see Table 3). Next, for each criterion matrix, the entropy and diversification vectors are determined (see Tables 4 and 5). Using the diversification vectors, we construct the diversification matrix, which is the basis for calculating the objective criteria weights using the interval TOPSIS method. Table 6 presents the reference points, in this case the MIC and the LIC. After calculating the distance of each row of the diversification matrix from the MIC and LIC, the RCCs are calculated (see Table 7). These coefficients, after normalization, are the objective criteria weights (see Table 7 and Figure 3). In our example, we obtain the following vector:

$$w = (0.3049, \ 0.4372, \ 0.2579).$$

**Table 2.** Normalized individual decision matrices for the calculation of criteria weights.


**Table 3.** Matrices for each criterion.


**Table 4.** Vectors of entropy.


**Table 5.** Vectors of diversification.


**Table 6.** MIC and LIC.



**Table 7.** Objective criteria weights.

**Figure 3.** Objective criteria weights.

The second main step of the proposed approach is to use the extension of the TOPSIS method for GDM without the aggregation of individual matrices, as described in Stage 3 of Section 3. The individual decision matrices (see Table 1) are normalized (see Table 8) using Equation (38) and then Equation (39). Using the objective criteria weights (see Table 7), we calculate the weighted normalized decision matrices (see Table 9). These matrices are the basis for constructing the matrix for each alternative (see Table 10), of the form given by Equation (43). Now, we apply the extended TOPSIS method to the matrices of alternatives in order to rank the alternatives. Table 11 presents the reference points, in this case the PIS and the NIS. Finally, the distances of the alternatives from the PIS and NIS and the RCCs are calculated (see Table 12). Based on these coefficients, the ranking of the alternatives is as follows:

$$A\_3 \prec A\_1 \prec A\_2$$

where " ≺ " means "inferior to" (see Table 12 and Figure 4). It means that the highest rating is given to the vice president, *A*2. The symbol *J* in Table 12 represents the normalized RCCs.


**Table 8.** Normalized individual decision matrices for the TOPSIS method.


**Table 9.** Weighted normalized individual decision matrices for the TOPSIS method.

**Table 10.** Matrices of alternatives.


**Table 11.** PIS and NIS.


**Table 12.** The ranking of the alternatives—*R*.


**Figure 4.** The ranking of the alternatives.

#### **5. Comparison of the Proposed Method with Other, Similar Approaches**

In the following, the approach proposed in Section 3 will be compared with other, similar approaches. In practice, the most common methods for GDM use a certain operator to aggregate the individual decision matrices, given by Equation (21), into a group matrix *X* of the form Equation (7), which is the starting point for the ranking of alternatives. To compare the results obtained by the proposed method (*PM*), we use the following operators:

• *AM*—arithmetic mean, defined by:

$$x_{ij} = \frac{1}{K} \sum_{k=1}^{K} x_{ij}^{k} = \left[ \frac{1}{K} \sum_{k=1}^{K} \underline{x}_{ij}^{k},\; \frac{1}{K} \sum_{k=1}^{K} \overline{x}_{ij}^{k} \right];$$

• *GM*—geometric mean, defined by:

$$x_{ij} = \left(\prod_{k=1}^{K} x_{ij}^{k}\right)^{\frac{1}{K}} = \left[ \left(\prod_{k=1}^{K} \underline{x}_{ij}^{k}\right)^{\frac{1}{K}},\; \left(\prod_{k=1}^{K} \overline{x}_{ij}^{k}\right)^{\frac{1}{K}} \right];$$

• *WM*—weighted mean, defined by:

$$x_{ij} = \sum_{k=1}^{K} \lambda_{k} x_{ij}^{k} = \left[ \sum_{k=1}^{K} \lambda_{k} \underline{x}_{ij}^{k},\; \sum_{k=1}^{K} \lambda_{k} \overline{x}_{ij}^{k} \right],$$

where *λ<sup>k</sup>* are weights that determine the importance of the DMs, such that *λ<sup>k</sup>* ∈ [0, 1] and ∑*<sup>K</sup> <sup>k</sup>*=<sup>1</sup> *λ<sup>k</sup>* = 1.
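Endpoint-wise, the three operators above are immediate to express in NumPy (a sketch using the same `K x m x n` endpoint-array convention as before; function names are illustrative):

```python
import numpy as np

def agg_am(X_lo, X_hi):
    """Arithmetic mean (AM) of K interval matrices, endpoint-wise."""
    return X_lo.mean(axis=0), X_hi.mean(axis=0)

def agg_gm(X_lo, X_hi):
    """Geometric mean (GM); requires positive intervals."""
    K = X_lo.shape[0]
    return X_lo.prod(axis=0) ** (1.0 / K), X_hi.prod(axis=0) ** (1.0 / K)

def agg_wm(X_lo, X_hi, lam):
    """Weighted mean (WM) with DM weights lam, lam_k in [0, 1], sum(lam) = 1."""
    lam = np.asarray(lam)[:, None, None]
    return (lam * X_lo).sum(axis=0), (lam * X_hi).sum(axis=0)
```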

In the *WM* method, the vector of DM weights *λ* = (0.2661, 0.3573, 0.3766) is determined by the method proposed in [38]. Next, based on the matrix *X*, we determine the objective criteria weights using the method proposed by Hosseinzadeh Lotfi and Fallahnejad [35]. In this case, the criteria weights are in the form of INs, so we do not compare them with the criteria weights obtained by the proposed method, described in Stage 2 of Section 3 and presented in Table 7. To obtain the ranking of the alternatives, we use the normalization method proposed by Jahanshahloo et al. [27]; the PIS and NIS are determined using the order relations given by Equations (5) and (6), whereas the distances of the alternatives from the PIS and NIS are calculated using Equations (46) and (47), with *K* = 1. Because the analyzed methods are significantly different, to compare the final results we use the indicator *J* instead of the RCCs. Table 13 and Figure 5 present the results obtained. We can notice that all the analyzed methods indicate alternative *A*2 as the best one, and that the obtained values of the indicator *J* are similar. On the other hand, the methods that use an aggregation operator give a different ranking than the proposed method, of the form:

$$A\_1 \prec A\_3 \prec A\_2$$

in which alternatives *A*1 and *A*3 are swapped.

**Table 13.** Comparison of results.


**Figure 5.** Comparison of results.

#### **6. Conclusions**

This paper presents a new extension of the TOPSIS method for GDM using INs. It is an alternative to methods based on the aggregation of individual matrices: it transforms the decision matrices into criteria matrices to determine the objective criteria weights, and into matrices of alternatives to rank the alternatives. The numerical example shows that the results obtained by the proposed method differ from the results obtained by the methods based on the aggregation of individual matrices using the arithmetic mean, the geometric mean, and the weighted mean (with weights reflecting the importance assigned to the DMs).

However, it is worth noting that the proposed method has a limitation: it handles only data in the form of INs. This implies the necessity of extending the proposed method to other types of imprecise data, which will be the subject of further research. Furthermore, the proposed method should be extended to take into account subjective criteria weights, as well as subjective and objective weights of the DMs, to ensure that all key elements of the decision-making process are taken into account.

**Funding:** The work was performed in the framework of project WZ/WI-IIT/1/2020 at the Bialystok University of Technology and financed by the Ministry of Science and Higher Education.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The author would like to thank the editor of the *Entropy* journal and the two anonymous reviewers for their valuable comments and suggestions.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **Abbreviations**


#### **References**

