**2. Literature Review**

Analytical mathematical optimization problems were solved as early as the 17th century. The first proposed solution investigated the problem of finding minima and maxima and was described by Pierre de Fermat in the 17th century. Newton developed the method of fluxions. The technique was rediscovered and published in the paper "New Method for the Greatest and the Least" by G. W. Leibniz in 1684. Further efforts by Euler and Lagrange led to solutions of extremum problems. In 1824, Fourier created the first algorithm for solving linear arithmetic constraints [18]. This algorithm enabled further advances in the field, such as the main duality theorem, the Farkas lemma, the Motzkin transposition theorem and others [19]. The traditionally employed models of optimization include linear programming, sequential quadratic programming, nonlinear programming, and dynamic programming [20]. In 1939, the first formulation of the linear programming problem and a method for solving it were proposed by Leonid Kantorovich. In 1947, Dantzig created the simplex method, which was effectively used to solve linear programming problems [21]. Derivative-based stochastic optimization began with a seminal paper by Robbins and Monro (1951) that launched the entire field [22]. Richard Bellman developed the dynamic programming method in the 1950s [23].

Decision-making methods based on optimality were introduced by Pareto in 1896 and applied to a wide range of problems. The Multi-Objective Evolutionary Algorithm (MOEA) [24] is used to find the optimal Pareto solutions for specific problems [25]. Keeney and Raiffa [26] and Fishburn [27] introduced the Multi-Attribute Value Theory (MAVT), Multi-Attribute Value Analysis (MAVA) and Multi-Attribute Utility Theory (MAUT) methods. Data envelopment analysis (DEA), introduced by Charnes et al., is a linear programming method for measuring the efficiency of multiple decision-making units by analysing problems with multiple inputs and outputs [28].

Multiple criteria decision-making methods evolved from operations research theory by solving problems such as the development of computational and mathematical tools to support the subjective assessment of performance criteria by decision-makers [29]. MADM, as a discipline, has a relatively short history of approximately 30 years. Its role has increased significantly in different application areas along with the development of new methods and the improvement of existing ones.

A work by Hwang and Yoon presented a plethora of methods for solving MADM problems [7], including methods based on the cardinal preference of attributes: the Linear Assignment method [30], the Simple Additive Weighting (SAW) method [31], the Hierarchical Additive Weighting method, the ELECTRE method, and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [7]. The most familiar and commonly used is the SAW method, which reflects the core idea of multi-criteria methods: merging criterion values and their weights into a single value [32].

Peng and Wang proposed the concept of hesitant uncertain linguistic Z-numbers (HULZNs) and presented a Multi-Criteria Group Decision-Making (MCGDM) method by integrating power operators with the Vlse Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) [5] model. Peng and Wang also merged Multi-Objective Optimization by Ratio Analysis plus the Full Multiplicative Form (MULTIMOORA) and power aggregation operators in order to create a comprehensive decision model for MCGDM problems with Z-numbers [33]. The outranking ELECTRE [34] and PROMETHEE [35] methods were described in the publication on multiple criteria decision analysis by Belton and Stewart in 2001 [10]. Opricovic and Tzeng conducted a comparative analysis of the VIKOR and TOPSIS methods in 2004 [5,36].

New methods have recently emerged that are actively used in different fields of science: Weighted Aggregated Sum Product Assessment (WASPAS) [37], the Complex Proportional Assessment method (COPRAS) [38], Multi-Objective Optimization by Ratio Analysis (MOORA) [39], COPRAS grey (COPRAS-G), fuzzy Additive Ratio Assessment (ARAS-F) [40], ARAS grey (ARAS-G) and MULTIMOORA (MOORA plus the full multiplicative form) [41,42], KEmeny Median Indicator Ranks Accordance (KEMIRA) [43], ARAS [44], and the newest extensions of the ELECTRE [45] and PROMETHEE [46,47] methods. Examples of partial aggregation methods include Step-Wise Weight Assessment Ratio Analysis (SWARA) [48] and factor relationship (FARE).

Criterion weights are one of the components of MCDM methods and therefore have a strong impact on the final result [15]. For defining criterion weights, subjective evaluation is the most frequently applied technique, in which experts examine the significance of criteria, although objective and generalized estimates are also known [49]. Weights can be set directly or using weighting methods such as the Analytic Hierarchy Process (AHP) [50,51], the Fuzzy Analytic Hierarchy Process (FAHP) [52,53], SWARA [54], Criterion Impact LOSS (CILOS) [55], Integrated Determination of Objective Criteria Weights (IDOCRIW) [14,56], etc. Recalculation of criterion weights under the Bayes theorem is proposed in [56]. Regardless of the method, the evaluation principle remains that the weight of the most important criterion is the highest. It is agreed that the sum of all weights should be equal to 1 [1]. Any measurement scale may be used for the evaluations.

Based on a study by Sabaei et al., the most common decision management methods used in Scopus database publications are AHP, ELECTRE, and PROMETHEE [57]. The early 1990s witnessed a shift of focus toward methods that consider indifference and ensure the transparency of analysis processes [58]. An analogous study conducted by Mardani et al. aimed at determining the popularity of decision-making methods. The results showed that hybrid MADM and fuzzy MADM approaches (27.92%) were used more often than other methods. The most commonly used methods are AHP and fuzzy AHP [59] (24.87%), ELECTRE and fuzzy ELECTRE [60], MCDA and MCA (12.69%), TOPSIS and fuzzy TOPSIS [61], and PROMETHEE and fuzzy PROMETHEE [62] (5.08%) [1].

Mardani et al. carried out research and published the obtained material in the paper "Multiple Criteria Decision-Making Techniques and Their Applications" (i.e., a literature review for the period from 2000 to 2014 [2]). Another paper by Mardani et al. reviewed decision-making methods in the field of energy management for the period 1995–2015 [1].

The concept of sensitivity analysis in decision theory concerns the effective use and implementation of quantitative decision models; its purpose is to assess the stability of an optimal solution under changes in parameters, the impact of the lack of controllability of specific parameters, and the need for the precise estimation of parameter values [63]. The first significant works on sensitivity analysis in the field of decision-making were done by Evans [63], who formulated the concepts of sensitivity analysis in linear programming to develop a formal approach applicable to classical decision-theoretic problems [64] and presented two simple computational procedures for the sensitivity analysis of additive multi-attribute value models under variations in attribute weights. Insua [65] developed a conceptual framework for sensitivity analysis in discrete multi-criteria decision-making, which allowed simultaneous variations in judgmental data and applied to many paradigms of decision analysis. Janssen [66] discussed the sensitivity of the rankings of alternatives to the overall uncertainty in scores, and priorities were analyzed using the Monte Carlo approach. Butler [67] presented a simulation approach allowing simultaneous changes in the weights and generating results that could be easily analyzed to provide statistical insights into multi-criteria model recommendations.

Wolters and Mareschal [68] proposed three novel types of sensitivity analysis focused on and elaborated for the PROMETHEE methods. Masuda [69] studied the sensitivity problems of the AHP method. In his work, he concentrated on how changes in entire columns of the decision-making matrix might affect the values of the composite priorities of alternatives. Triantaphyllou [70] presented a methodology for performing a sensitivity analysis of the weights of decision criteria and identifying the performance values of the alternatives expressed in terms of decision criteria. The estimation of the effect of uncertainty in the SAW method was performed by Podvezko [71], who determined the points of varying ranges of criterion weights of the investigated process, evaluated the compatibility level and stability of expert opinions, and assessed the effect of uncertainty on the ranking of comparable objects employing the imitation method. The impact of varying weights on the final result in the SAW method was studied by Zavadskas [72] and Memariani [73]. The influence of the elements of the decision matrix on the final ranking result was analyzed by Alinezhad [74]. The effect of the importance of criterion weights on the results of the TOPSIS method was studied by Yu [75] and Alinezhad [76]. Misra focused on a comparison of the AHP, Decision-Making Trial and Evaluation Laboratory (DEMATEL), COPRAS, and TOPSIS methods [77]. Podvezko [32] compared the SAW, TOPSIS and COPRAS methods. Moghassem [78] increased and decreased all criterion weights by 5%, 10%, 15%, and 20% in analyzing the sensitivity of TOPSIS and VIKOR. Hsu conducted a sensitivity analysis of TOPSIS by increasing and decreasing the top three weights by 10% [79].

### **3. MADM Methods as a Component of Mathematics-Based Optimization Techniques**

To formulate the optimization problem, the paper presents a set of optimized elements and a measure of the goodness of these elements (quality estimates).

The optimization problem takes the form of

$$\underset{\mathbf{x}\in D}{\text{opt}}f(\mathbf{x}),\tag{1}$$

where *f*(*x*) : *D* → *Y* is the objective function or criterion; *D* is the permissible set or area of the optimized objects; and *opt* denotes the minimum or maximum value of the function *f*(*x*).

The literature provides a number of different classifications of optimization problems. Typically, specific decision-making methods are created for each category of problems according to the characteristics of that particular class. Weights do not vary in the SAW, TOPSIS, COPRAS, MOORA and PROMETHEE methods; they are determined using subjective or objective weighting methods. The number of comparable alternatives in these methods is finite.

MADM methods can be presented as a mathematical optimization problem as follows:

$$i\_{opt}^{\nu}(r) = \arg\max\_{i} f^{\nu}(r, \omega),\ i = 1, \dots, n,\tag{2}$$

where ν is the number of the MADM method. The merit of alternatives *i* = 1, ... , *n* is evaluated according to criteria *j* = 1, ... , *m*, and the values are defined as *r* = (*rij*). The influence of criteria on the evaluation result differs, and therefore the vector ω = (ω*j*), *j* = 1, ... , *m*, of criterion weights is determined, thus defining the importance of criteria.

*3.1. SAW (Simple Additive Weighting) Method (*ν = *1)*

$$i\_{opt}^{1}(r) = \arg\max\_{i} \sum\_{j=1}^{m} \omega\_{j} \widetilde{r}\_{ij},\tag{3}$$

where the values of $\widetilde{r}\_{ij}$ are normalized according to the formula:

$$
\widetilde{r\_{ij}} = \frac{r\_{ij}}{\sum\_{i=1}^{n} r\_{ij}}.\tag{4}
$$

When the values of criteria are multi-dimensional, they are transformed. The values of the maximized criteria are calculated according to the formula:

$$
\overline{r}\_{ij} = \frac{r\_{ij}}{\max\_{i} r\_{ij}}.\tag{5}
$$

Then, the transformed value $\overline{r}\_{ij}$ corresponding to the highest *rij* is equal to 1. The values of the minimized criteria *rij* are correspondingly calculated according to the formula:

$$
\overline{r}\_{ij} = \frac{\min\_{i} r\_{ij}}{r\_{ij}}.\tag{6}
$$

Then, the transformed value $\overline{r}\_{ij}$ corresponding to the lowest *rij* is equal to 1. For standard criteria, the principle of simple linear scalarization is applied.
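As an illustrative sketch (not part of the original paper), Equations (3) and (4) can be implemented in a few lines of Python; the decision matrix and weights below are hypothetical:

```python
import numpy as np

def saw(r, w):
    """Simple Additive Weighting: rank alternatives (rows of r).

    r : (n, m) decision matrix of maximized criteria values
    w : (m,) criterion weights summing to 1
    Returns the index of the best alternative and the scores.
    """
    r_norm = r / r.sum(axis=0)           # Equation (4): column-wise normalization
    scores = (r_norm * w).sum(axis=1)    # Equation (3): weighted sum per alternative
    return scores.argmax(), scores

# Hypothetical data: 3 alternatives, 2 maximized criteria
r = np.array([[8.0, 7.0], [9.0, 6.0], [7.0, 9.0]])
w = np.array([0.6, 0.4])
best, scores = saw(r, w)
```

Because each normalized column and the weight vector both sum to 1, the SAW scores of all alternatives also sum to 1.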

*3.2. TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) Method (*ν = *2)*

$$i\_{opt}^{2}(r) = \arg\max\_{i} \frac{\sqrt{\sum\_{j=1}^{m} \left(\omega\_{j}(\widetilde{r}\_{ij} - \widetilde{r}\_{j}^{-})\right)^2}}{\sqrt{\sum\_{j=1}^{m} \left(\omega\_{j}(\widetilde{r}\_{ij} - \widetilde{r}\_{j}^{-})\right)^2} + \sqrt{\sum\_{j=1}^{m} \left(\omega\_{j}(\widetilde{r}\_{ij} - \widetilde{r}\_{j}^{+})\right)^2}}.\tag{7}$$

The method refers to vector data normalization:

$$
\widetilde{r\_{ij}} = \frac{r\_{ij}}{\sqrt{\sum\_{i=1}^{n} r\_{ij}^2}} \,\tag{8}
$$

where $\widetilde{r}\_{ij}$ is the normalized value of the *j*th criterion for the *i*th alternative.

The vectors of the best values *R*<sup>+</sup> and the worst values *R*<sup>−</sup> of the criteria (the ideal and negative-ideal alternatives) are calculated as

$$\begin{aligned} R^{+} &= \{ \widetilde{r}\_1^{+}, \widetilde{r}\_2^{+}, \dots, \widetilde{r}\_m^{+} \} = \{ (\max\_i \widetilde{r}\_{ij} / j \in J\_1), (\min\_i \widetilde{r}\_{ij} / j \in J\_2) \}, \\ R^{-} &= \{ \widetilde{r}\_1^{-}, \widetilde{r}\_2^{-}, \dots, \widetilde{r}\_m^{-} \} = \{ (\min\_i \widetilde{r}\_{ij} / j \in J\_1), (\max\_i \widetilde{r}\_{ij} / j \in J\_2) \}, \end{aligned} \tag{9}$$

where *J*1 is the set of indices of the maximized criteria, *J*2 is the set of indices of the minimized criteria, and $\widetilde{r}\_j^{-}$ ($\widetilde{r}\_j^{+}$) is the worst (best) value of the *j*th criterion.

The basic principle of the method is to find the alternative with the shortest overall distance from the best values of the criteria and the maximum distance from the worst values. The method does not require the rearrangement of the minimized (maximized) criteria into the maximized (minimized) ones.
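A minimal Python sketch of Equations (7)–(9), assuming hypothetical data (the `minimized` argument carries the index set *J*2):

```python
import numpy as np

def topsis(r, w, minimized=()):
    """TOPSIS relative closeness to the ideal solution; higher is better.

    r : (n, m) decision matrix; w : (m,) criterion weights
    minimized : indices of the minimized criteria (set J2)
    """
    r_norm = r / np.sqrt((r ** 2).sum(axis=0))   # Equation (8): vector normalization
    v = w * r_norm                                # weighted normalized matrix
    ideal = v.max(axis=0).copy()                  # R+ for maximized criteria
    anti = v.min(axis=0).copy()                   # R- for maximized criteria
    for j in minimized:                           # swap roles for minimized criteria
        ideal[j], anti[j] = anti[j], ideal[j]
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))   # distance to the best values
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))   # distance to the worst values
    closeness = d_minus / (d_plus + d_minus)      # Equation (7)
    return closeness.argmax(), closeness

# Hypothetical data: 3 alternatives, 2 maximized criteria
best, closeness = topsis(np.array([[8.0, 7.0], [9.0, 6.0], [7.0, 9.0]]),
                         np.array([0.6, 0.4]))
```

The closeness coefficient lies in [0, 1]; an alternative coinciding with the ideal point would score 1.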

*3.3. PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) Method (*ν = *3)*

$$\begin{split} i\_{opt}^{3}(r) &= \arg\max\_{i} F\_{i} = \arg\max\_{i} \left( F\_{i}^{+} - F\_{i}^{-} \right) = \\ &= \arg\max\_{i} \left( \sum\_{g=1}^{n} \pi(A\_{i}, A\_{g}) - \sum\_{g=1}^{n} \pi(A\_{g}, A\_{i}) \right) = \\ &= \arg\max\_{i} \left( \sum\_{g=1}^{n} \sum\_{j=1}^{m} \omega\_{j} p\_{h}\left( d\_{j}(A\_{i}, A\_{g}) \right) - \sum\_{g=1}^{n} \sum\_{j=1}^{m} \omega\_{j} p\_{h}\left( d\_{j}(A\_{g}, A\_{i}) \right) \right), \end{split} \tag{10}$$

where *i* = 1, 2, ... , *n*; $\sum\_{j=1}^{m} \omega\_{j} = 1$; $d\_{j}(A\_{i}, A\_{g}) = r\_{ij} - r\_{gj}$ is the difference between the values *rij* and *rgj* of the *j*th criterion *Rj* for alternatives *Ai* and *Ag*; and *ph*(*d*) = *ph*(*dj*(*Ai*, *Ag*)) is the value of the *h*th priority function for the selected *j*th criterion.

The PROMETHEE method uses the basic ideas of other methods, such as combining the values of weights and normalized criteria into a single estimate (SAW method) and pairwise comparison (AHP method). Instead of the normalized criterion values, the value of the priority function *ph*(*d*), 0 ≤ *ph*(*d*) ≤ 1, is used, and all possible pairs of alternatives are compared with each other for each of the criteria. A higher value of *ph*(*d*) corresponds to a better alternative; if the difference *d* is lower than the established critical value *q*, then *ph*(*d*) = 0. If *d* is greater than the maximum limit *s* for the values of criteria, then *ph*(*d*) = 1.

In practice, six priority functions *ph*(*d*), *h* = 1, ... , 6, are applied [3,80].

The priority function of the usual criterion is equal to

$$p\_1(d) = \begin{cases} 0, \text{ when } d \le 0 \\ 1, \text{ when } d > 0. \end{cases} \tag{11}$$

The function chart is shown in Figure 1a.

The priority function of the U-shape criterion is equal to

$$p\_2(d) = \begin{cases} 0, \text{ when } d \le q \\ 1, \text{ when } d > q. \end{cases} \tag{12}$$

The function chart is shown in Figure 1b.

The priority function of the V-shape criterion (linear priority) is equal to

$$p\_3(d) = \begin{cases} 0, \text{ when } d \le 0 \\ \frac{d}{s}, \text{ when } 0 < d \le s \\ 1, \text{ when } d > s. \end{cases} \tag{13}$$

The function chart is shown in Figure 1c.

The priority function of the level criterion is equal to

$$p\_4(d) = \begin{cases} 0, \text{ when } d \le q \\ 0.5, \text{ when } q < d \le s \\ 1, \text{ when } d > s. \end{cases} \tag{14}$$

The function chart is shown in Figure 1d.

The priority function of the V-shape with indifference criterion is equal to

$$p\_5(d) = \begin{cases} 0, \text{ when } d \le q \\ \frac{d-q}{s-q}, \text{ when } q < d \le s \\ 1, \text{ when } d > s. \end{cases} \tag{15}$$

The function chart is shown in Figure 1e.

The priority function of the Gaussian criterion is equal to

$$p\_6(d) = \begin{cases} 0, \text{ when } d \le 0 \\ 1 - \exp\left(-\frac{d^2}{2\sigma^2}\right), \text{ when } d > 0. \end{cases} \tag{16}$$

The function chart is shown in Figure 1f.

As mentioned above, PROMETHEE, like the other multi-criteria decision methods, applies the idea of the SAW method; however, instead of the normalized criterion values $\widetilde{r}\_{ij}$, it uses the values of specifically selected priority functions *ph*(*d*), where the argument *d* is the difference between the values of the criterion.

**Figure 1.** Function charts of criterion priorities: (**a**) function chart of the priorities of the usual criterion; (**b**) function chart of the priorities of the U-shape criterion; (**c**) function chart of the priorities of the V-shape criterion; (**d**) function chart of the priorities of the level criterion; (**e**) function chart of the priorities of the V-shape with indifference criterion; (**f**) function chart of the priorities of the Gaussian criterion.
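A minimal sketch of the net flow in Equation (10), using the V-shape with indifference priority function of Equation (15); the thresholds and decision matrix below are hypothetical:

```python
import numpy as np

def p5(d, q, s):
    """V-shape with indifference priority function, Equation (15)."""
    if d <= q:
        return 0.0
    if d <= s:
        return (d - q) / (s - q)
    return 1.0

def promethee_net_flows(r, w, q, s):
    """Net outranking flows F_i = F_i^+ - F_i^-, Equation (10).

    r : (n, m) matrix of maximized criteria; w : (m,) weights summing to 1
    q, s : (m,) indifference and preference thresholds per criterion
    """
    n, m = r.shape
    flows = np.zeros(n)
    for i in range(n):
        for g in range(n):
            if i == g:
                continue
            # aggregated preference of A_i over A_g minus that of A_g over A_i
            pi_ig = sum(w[j] * p5(r[i, j] - r[g, j], q[j], s[j]) for j in range(m))
            pi_gi = sum(w[j] * p5(r[g, j] - r[i, j], q[j], s[j]) for j in range(m))
            flows[i] += pi_ig - pi_gi
    return flows

# Hypothetical data: 3 alternatives, 2 maximized criteria
flows = promethee_net_flows(np.array([[8.0, 7.0], [9.0, 6.0], [7.0, 9.0]]),
                            np.array([0.6, 0.4]),
                            q=np.array([0.5, 0.5]), s=np.array([2.0, 2.0]))
```

Because each pairwise preference appears once with a plus and once with a minus sign, the net flows of all alternatives sum to zero; this is why PROMETHEE results can be negative and need the positive transformation discussed in Section 4.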

*3.4. COPRAS (Complex Proportional Assessment) Method (*ν = *4)*

$$i\_{opt}^{4}(r) = \arg\max\_{i} \left( \sum\_{j=1}^{m} \omega\_{+j}\widetilde{r}\_{+ij} + \frac{\sum\_{i=1}^{n} \sum\_{j=1}^{m} \omega\_{-j}\widetilde{r}\_{-ij}}{\sum\_{j=1}^{m} \omega\_{-j}\widetilde{r}\_{-ij} \cdot \sum\_{i=1}^{n} \left(\sum\_{j=1}^{m} \omega\_{-j}\widetilde{r}\_{-ij}\right)^{-1}} \right),\tag{17}$$

where ω+*j* (ω−*j*) are the weights of the maximized (minimized) criteria, and $\widetilde{r}\_{+ij}$ ($\widetilde{r}\_{-ij}$) are the normalized values of the maximized (minimized) criteria for each *i*th alternative. The values of the estimates of alternatives are normalized according to Equation (4).

The application of the COPRAS method separately assesses the effect of the minimized and maximized criteria on the result of the evaluation [38,81].
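A minimal Python sketch of Equation (17), assuming hypothetical data; when there are no minimized criteria, the COPRAS score reduces to the SAW score:

```python
import numpy as np

def copras(r, w, minimized=()):
    """COPRAS relative significance of alternatives, Equation (17); higher is better.

    r : (n, m) decision matrix with positive entries; w : (m,) criterion weights
    minimized : indices of the minimized criteria
    """
    r_norm = r / r.sum(axis=0)                 # Equation (4): normalization
    v = w * r_norm                             # weighted normalized matrix
    minim = np.zeros(r.shape[1], dtype=bool)
    minim[list(minimized)] = True
    s_plus = v[:, ~minim].sum(axis=1)          # effect of the maximized criteria
    s_minus = v[:, minim].sum(axis=1)          # effect of the minimized criteria
    if minim.any():
        # Equation (17): reward alternatives with small minimized-criteria sums
        q = s_plus + s_minus.sum() / (s_minus * (1.0 / s_minus).sum())
    else:
        q = s_plus                             # no minimized criteria: coincides with SAW
    return q.argmax(), q

# Hypothetical data: 3 alternatives; criterion 1 is minimized
best, q_vals = copras(np.array([[8.0, 3.0], [9.0, 5.0], [7.0, 4.0]]),
                      np.array([0.6, 0.4]), minimized=(1,))
```

The no-minimized-criteria case mirrors the observation in Section 4.3 that SAW and COPRAS coincide when all criteria are maximized.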

### *3.5. MOORA (Multi-Objective Optimization on the Basis of Ratio Analysis) Method (*ν = *5)*

$$i\_{opt}^{5}(r) = \arg\max\_{i} \left( \sum\_{j=1}^{g} \widetilde{r}\_{ij} - \sum\_{j=g+1}^{m} \widetilde{r}\_{ij} \right).\tag{18}$$

For the values of $\widetilde{r}\_{ij}$, vector normalization according to Equation (8) is applied. The initial version of the MOORA method did not take into account the importance of the criteria expressed in weights. The calculation principle of the method is that the sum of the values of the minimized normalized criteria (from *g* + 1 to *m*) is subtracted from the sum of the values of the maximized normalized criteria (from 1 to *g*). In a later development of the MOORA method, Brauers introduced the weights of criteria [39]. The improved MOORA method is applied for the calculations.
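A minimal sketch of the improved (weighted) MOORA of Equation (18), assuming hypothetical data and the column convention stated above (maximized criteria first):

```python
import numpy as np

def moora(r, w, g):
    """Improved MOORA with criterion weights, Equation (18).

    r : (n, m) matrix with maximized criteria in columns 0..g-1
        and minimized criteria in columns g..m-1
    w : (m,) criterion weights
    """
    r_norm = r / np.sqrt((r ** 2).sum(axis=0))   # Equation (8): vector normalization
    v = w * r_norm                               # weighted normalized matrix
    # subtract the minimized-criteria sum from the maximized-criteria sum
    scores = v[:, :g].sum(axis=1) - v[:, g:].sum(axis=1)
    return scores.argmax(), scores

# Hypothetical data: 3 alternatives, both criteria maximized (g = 2)
best, scores = moora(np.array([[8.0, 7.0], [9.0, 6.0], [7.0, 9.0]]),
                     np.array([0.6, 0.4]), g=2)
```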

The presented methods have been selected as some of those most frequently applied in practice. Similarly, other familiar criteria such as VIKOR, ELECTRE, and others for evaluating the MADM method can be presented as objective functions.

### **4. Experimental Application of the Methodology Merging MADM Methods**

The application of several MADM methods may yield differently scaled evaluation results and rankings, making it unclear what decision should be made. Each method has its own theoretical basis and logic, which leads to differences in the results.

This chapter describes the methodology for merging the results of MADM methods and presents its practical application. The methodology proposes making calculations using several MADM methods and then merging their results into a single value according to the importance of each method for the problem solved. The SAW, COPRAS, TOPSIS, PROMETHEE and MOORA methods are used in the calculations.

To sum up the results of different methods into a single value, the result data must be normalized beforehand. Linear, classical, vector, logarithmic and other normalization techniques are known. Unlike the other methods, the results obtained by applying PROMETHEE are both positive and negative numbers. To transform the results of the PROMETHEE method and the other MADM techniques to a uniform scale, the PROMETHEE result data must be converted into positive values.

### *4.1. Methodology for Merging the Results of MADM Methods*

The weight representing the importance of a MADM method is defined as Ως. The stability result of a separate method is defined as *S*ς and is expressed as a percentage.

The weights of methods are normalized in the following way:

$$
\Omega\_{\varsigma} = \frac{S\_{\varsigma}}{\sum\_{\varsigma=1}^{\nu} S\_{\varsigma}}, \sum\_{\varsigma=1}^{\nu} \Omega\_{\varsigma} = 1. \tag{19}
$$

The best alternative is established as

$$i\_{opt}(\mu) = \arg\max\_{i} \sum\_{\varsigma=1}^{\nu} \Omega\_{\varsigma} \cdot \mu\_{i,\varsigma},\tag{20}$$

where μ*i*,ς is the normalized result of the ςth MADM method for the *i*th alternative.


For handling negative values and making the scales of the results of the methods equal, Wietendorf's [82] linear normalization, which rearranges data into the range [0, 1], is suitable:

$$x\_{tr} = \frac{x - x\_{min}}{x\_{max} - x\_{min}},\tag{21}$$

where *xtr* is the normalized result of the method and *xtr* ∈ [0, 1], *x* is the initial obtained result of the method, *xmin* is the lowest value of the results of methods, and *xmax* is the highest value of the results of methods.

Another way to make the MADM result data comparable is to employ classical normalization [83]:

$$
\widetilde{\mu}\_{i,\varsigma} = \frac{\mu\_{i,\varsigma}}{\sum\_{i=1}^{n} \mu\_{i,\varsigma}}.\tag{22}
$$

Thus, the results of the PROMETHEE method are transformed into positive numbers beforehand. The transformed value of the evaluation result takes the form $\overline{F}\_i$, *i* = 1, ... , *n*. The results *Fi* obtained by applying the PROMETHEE method are sorted in ascending order. The lowest transformed result is set equal to $\overline{F}\_1 = 1$. The other transformed values are calculated as follows:

$$
\overline{F}\_{i+1} = \overline{F}\_i + F\_{i+1} - F\_i, \; i = 1, \dots, n-1. \tag{23}
$$
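A minimal Python sketch of the merging procedure of Equations (19)–(21), using the SAW and PROMETHEE score vectors and stability percentages reported in the case study of Section 4.3 (merging only these two methods here is an illustrative simplification):

```python
import numpy as np

def linear_normalize(x):
    """Wietendorf's linear normalization, Equation (21): maps results to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def merge_results(results, stability):
    """Merge MADM results into a single value, Equations (19)-(21).

    results   : list of per-method score vectors over the same alternatives
    stability : per-method stability percentages S used as method weights
    """
    omega = np.asarray(stability, dtype=float)
    omega = omega / omega.sum()                        # Equation (19): method weights
    merged = sum(o * linear_normalize(v) for o, v in zip(omega, results))
    return merged.argmax(), merged                     # Equation (20)

# SAW and PROMETHEE scores and stability values from the paper's case study
saw_scores = [0.2022, 0.2001, 0.2020, 0.1958, 0.1999]
promethee_scores = [0.2127, -0.3252, 0.0889, -0.2309, 0.2544]
best, merged = merge_results([saw_scores, promethee_scores],
                             stability=[30.7, 26.8])
```

Note that the linear normalization maps the worst-rated alternative of each method to 0, the disadvantage discussed below in connection with Table 2.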

### *4.2. Algorithm for Defining MADM Stability*

Any mathematical model or method can be applied in practice provided it is stable in terms of the applied parameters. The stability of MADM is verified by employing the statistical simulation method using a sequence of random numbers from the given distribution.

The algorithm for evaluating the stability of the MADM method is presented in Figure 2.

**Figure 2.** The algorithm for evaluating the stability of the Multi-Attribute Decision-Making (MADM) method.

The MADM method ν determines the best alternative *i* for the initial data and fixes the number of this alternative, *Iopt*. Verifying the stability of a multi-criteria method involves introducing slight changes into the initial decision matrix (i.e., the expert evaluations *rij* and weights *wj*). The calculation is repeated with the newly generated values *new rij* and *new wj* using the MADM method, thus determining the number of the best alternative, *new Iopt*. The counter *sk* records the number of times *new Iopt* coincides with the initial *Iopt*. As mentioned in the introduction, *Y* = 10<sup>5</sup> cycles were selected as sufficient to evaluate the stability of the method to the nearest 0.1.

The stability coefficient that fixes the frequency of the recurrence of the best initial alternative is calculated by changing preliminary data. The method is more important for the result of the problem when the stability coefficient is higher.

When no information on the distribution of parameters for MADM methods is available, the uniform distribution is used for generating random values of *x*ς from the range $[\underline{X}, \overline{X}]$:

$$
x\_{\varsigma} = \underline{X} + \overline{q}\_{\varsigma} (\overline{X} - \underline{X}),\tag{24}
$$

where $\overline{q}\_{\varsigma} \in [0, 1]$.

The random values of the alternative estimates and criterion weights are generated by slightly changing the initial data *rij* and *wi* by 10%, with $\overline{q}\_{\varsigma} \in [0, 1]$:

$$\begin{array}{l} new \, r\_{ij} = \min r\_{ij} + \overline{q}\_{\varsigma} \cdot (\max r\_{ij} - \min r\_{ij}),\\ new \, w\_{i} = \min w\_{i} + \overline{q}\_{\varsigma} \cdot (\max w\_{i} - \min w\_{i}), \end{array} \tag{25}$$

The variation limits [*min rij*, *max rij*] of alternative estimates *rij* are determined as

$$\begin{aligned} \max r\_{ij} &= r\_{ij} + 0.1 \cdot r\_{ij}, \\ \min r\_{ij} &= r\_{ij} - 0.1 \cdot r\_{ij}. \end{aligned} \tag{26}$$

Accordingly, the variation limits [*min wi*, *max wi*] of criterion weights *wi* are equal to

$$\begin{aligned} \max w\_{i} &= w\_{i} + 0.1 \cdot w\_{i}, \\ \min w\_{i} &= w\_{i} - 0.1 \cdot w\_{i}. \end{aligned} \tag{27}$$

By applying the algorithm for verifying the stability of the MADM method (Figure 2), the stability of all multi-criteria decision-making methods described in this paper is checked. The higher the frequency of the recurrence of the best alternative, the more stable the method. The proposed approach considers the uncertainty of expert evaluation data and therefore decreases the level of subjectivity of the evaluation. An evaluation carried out by applying multiple MADM methods allows selecting the result of the most stable method or merging the results of several methods into a single value.
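The stability algorithm of Figure 2 can be sketched in Python as follows; the SAW scorer and the decision matrix below are hypothetical stand-ins for any MADM method, and the ±10% perturbation implements Equations (25)–(27):

```python
import numpy as np

def stability(madm, r, w, cycles=10_000, delta=0.1, seed=0):
    """Monte Carlo stability check of a MADM method (algorithm of Figure 2).

    madm  : callable returning (best_index, scores) for a matrix and weights
    r, w  : initial decision matrix and criterion weights
    delta : relative variation limit (0.1 gives the +/-10% limits of Eqs. (26)-(27))
    Returns the share of cycles (percent) in which the initial best alternative recurs.
    """
    rng = np.random.default_rng(seed)
    i_opt, _ = madm(r, w)                     # best alternative of the initial data
    hits = 0
    for _ in range(cycles):
        # Equations (25)-(27): uniform draws within [r - 0.1 r, r + 0.1 r]
        new_r = r * (1 + delta * (2 * rng.random(r.shape) - 1))
        new_w = w * (1 + delta * (2 * rng.random(w.shape) - 1))
        new_i, _ = madm(new_r, new_w / new_w.sum())
        hits += (new_i == i_opt)              # counter sk of Figure 2
    return 100.0 * hits / cycles              # stability coefficient, in percent

def saw(r, w):
    """Simple SAW scorer used here only to demonstrate the stability check."""
    scores = ((r / r.sum(axis=0)) * w).sum(axis=1)
    return scores.argmax(), scores

# Hypothetical data; the paper itself uses Y = 10^5 cycles
s = stability(saw, np.array([[8.0, 7.0], [9.0, 6.0], [7.0, 9.0]]),
              np.array([0.6, 0.4]), cycles=1000)
```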

### *4.3. Experimental Application of Merging the Results of MADM Methods*

To illustrate the application of the method described in the paper, an example in which the estimates of alternatives differ slightly from each other has been chosen. The experts assessed the quality of the course units taught according to six criteria [17]. The descriptions of criteria, as well as the estimates of weights and course units, are given in Table 1. The mean of alternative estimates (i.e., course units), is in the range of [9.03, 9.34].


**Table 1.** Data on assessing course units.

Regarding the initial data (Table 1), the calculation has been conducted by applying the SAW (Equation (3)), TOPSIS (Equation (7)), MOORA (Equation (18)), COPRAS (Equation (17)) and PROMETHEE (Equation (10)) methods. Since all criteria are maximized in the problem solved (Table 1), the calculations of the SAW and COPRAS methods coincide [34]. Thus, only the SAW method will be mentioned below in the paper. The calculations of the PROMETHEE method used the V-shape with indifference priority function (Equation (15)) with parameters *q* {0.25; 1.75; 0.3; 0.25; 0.2} and *s* {1.2; 2; 0.6; 1.5; 0.75}. The parameters *q* and *s* were not changed when testing the stability of the PROMETHEE method.

The final ranked results are presented in Figure 3. The best alternative is ranked 1, whereas the worst-rated alternative is ranked 5. Calculations revealed that the results of the methods differ: SAW {0.2022; 0.2001; 0.2020; 0.1958; 0.1999}, TOPSIS {0.6029; 0.5272; 0.6004; 0.3640; 0.5413}, MOORA {4.2080; 4.1196; 4.2103; 3.9963; 4.1316}, and PROMETHEE {0.2127; −0.3252; 0.0889; −0.2309; 0.2544} [84]. Therefore, it is not possible to unambiguously identify the best course unit from the results obtained.

**Figure 3.** The results obtained using MADM methods. SAW: Simple Additive Weighting; TOPSIS: Technique for Order of Preference by Similarity to Ideal Solution; MOORA: Multi-Objective Optimization by Ratio Analysis; PROMETHEE: Preference Ranking Organization Method for Enrichment Evaluation.

According to the algorithm described above, the stability of the following methods has been determined: SAW 30.7%, TOPSIS 30.9%, MOORA 29.3% and PROMETHEE 26.8%.

The stability of all methods is low due to the similarity of the initial data. Even small variations in the initial data have changed the ranking of the best alternative. Having applied Equation (19), the weights of the methods are calculated: ΩSAW = 0.2608, ΩTOPSIS = 0.2625, ΩMOORA = 0.249, ΩPROMETHEE = 0.2277 (Figure 4). The weights of the methods are slightly different, and the most stable is the TOPSIS method.

**Figure 4.** A comparison of stability determined by applying MADM methods.

In order to merge the results of all methods, their estimates need to be unified. Thus, the MADM results are normalized in the range of [0, 1] (Table 2). Wietendorf's [82] linear normalization is suitable for the results of different scales as well as for the negative values of the PROMETHEE method.


**Table 2.** Normalized MADM result in the range of [0, 1].

Equation (20) is applied in summing up the estimates of the normalized methods considering their weights. The numerical results are presented in Figure 5. A comparison of the obtained results (Figure 5) with the data provided in Table 1 shows changes in the findings. The weights of criteria had a significant impact on the result. Compared to the ranked results of the individual methods, the merged MADM result matched that determined by applying the TOPSIS method, the method with the highest weight (i.e., importance) in the problem solved.

**Figure 5.** Merging the results of MADM methods following linear normalization.

Table 2 shows that Wietendorf's (Equation (21)) linear normalization has a disadvantage (i.e., zero estimates of alternatives). The weight of the method does not affect the worst-rated alternative as its result is normalized to the zero value.

When the results of the two worst-rated alternatives differ only slightly between methods, a different normalization may change the result. However, no such problems are encountered in finding the best alternative.

Another calculation method (i.e., technique for making values equal), involves classical normalization (Equation (22)) and pre-arranging the results of the PROMETHEE method using Equation (23). The transformed positive results of the PROMETHEE method are 1.5379, 1, 1.4141, 1.0943, and 1.5795. Table 3 shows the re-estimation of the methods using classical normalization [84]. The results of the MADM methods merged using Equation (20) are shown in Figure 6.


**Table 3.** Transformed MADM results applying classical normalization.

**Figure 6.** Merging the results of MADM methods following classical normalization.

The numerical results of the initial data (Table 1) and the merged results following classical (Figure 6) and linear (Figure 5) normalization are shown in Figure 7.

**Figure 7.** The results of evaluating alternatives, means of numerical values.

Before comparing the obtained information, the results were normalized so that the sum of all estimates of the alternatives equals one. The chart shows that the means of the estimates of the initial data differ slightly from each other. The merged results demonstrate that linear normalization leads to significant variations in the outcomes, which is clearly expressed in the evaluation of the fourth alternative. Differences in the results obtained following classical normalization are not significantly expressed in the chart.

The results expressed in ranks are shown in Figures 8 and 9. These charts indicate the mean ranks of the initial data, the ranks of the results of the merged MADM methods (following linear and classical normalization) and the means of the ranks of the results obtained by employing the MADM methods. The best alternative is ranked 1, whereas the worst-rated alternative is ranked 5.
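Converting merged estimates into the ranks shown in these charts can be sketched as follows; the merged estimates used here are hypothetical placeholders, not the paper's data.

```python
def scores_to_ranks(scores):
    """Convert merged estimates to ranks: the best-scoring alternative
    receives rank 1, the worst receives rank len(scores)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

# Hypothetical merged estimates for five alternatives.
merged = [0.2511, 0.1402, 0.2255, 0.1531, 0.2301]
ranks = scores_to_ranks(merged)
```

Averaging such ranks across several methods can produce non-integer values (e.g., 1.5), which is why the paper notes that mean ranks of closely rated alternatives may admit different interpretations.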

**Figure 8.** The results of evaluating alternatives, means of the ranked values.

**Figure 9.** A comparison of the results of evaluating alternatives.

For the first and second alternatives, the results of the initial data differ from those obtained by the evaluation. The merged results coincided following linear and classical normalization. The mean values of the results of the MADM methods largely coincided with the merged results of the MADM methods. Since the values of the weights Ω of the MADM methods are similar to each other (Figure 4), they did not have a significant effect on the final result. The average ranks of the first and third alternatives may lead to different interpretations, as their estimates equal 1.5 and 2, respectively. The combined results have unequivocally identified the best alternative as Alt. 1.

Table 3 shows that the sum of the estimates for each alternative is equal to 1, which facilitates comparing them. A comparison of the ranked results provided in Figures 5 and 6 demonstrates that the linear and classical normalization of the MADM results ranked all alternatives identically. When the mean values of the initial estimates are compared with the findings obtained using the MADM methods, the ranking results change due to the effect of the criterion weights.

### **5. Discussion and Conclusions**

The paper has considered MADM methods as an integral part of mathematical optimization theory. To illustrate the idea, some of the most widely applied methods, SAW, TOPSIS, MOORA, PROMETHEE and COPRAS, have been selected, and their evaluation criteria have been presented as objective functions, although this paper's methodology is not limited to these methods. Other MADM methods such as VIKOR, ELECTRE, Evaluation Based on Distance from Average Solution (EDAS), etc. can be introduced as objective functions in the same way. The author's forthcoming papers will explore more extensively the constraints on the variables of the above-listed and new MADM methods and will concentrate on the properties of the objective functions and their limitations.

The MADM methods introduced in this paper are employed for selecting the best alternative evaluated according to the established criteria. The purpose of classical optimization is analogous to that of the MADM methods presented in the paper: finding an optimal solution among several or many possible options. The use of MADM makes sense when comparing alternatives none of which dominates under all evaluation criteria. The data used in the presented MADM methods do not change during the search for the optimal solution among all available ones: the decision matrix and the vector of criterion weights are static data, and the number of optional alternatives is finite.

Merging the results of the MADM methods in accordance with their importance demonstrated their potential for evaluation. There is a large number of MADM methods, and the literature therefore provides no unambiguous recommendation for the most appropriate one; in practice, multiple MADM methods are frequently applied together. This paper presented a methodology for merging the results of MADM methods, based on summing the normalized MADM results into a single value while considering the methods' stability.

The findings have demonstrated that weights have a significant influence on the result. In order to analyze the influence of the weights of criteria and methods on the obtained result, a problem example was presented in the practical part of the paper, in which the average evaluations of the alternatives differed little from one another. Criterion weights have been found to significantly alter the primary outcomes. The established stability of the applied methods did not differ significantly: ΩSAW = 0.2608, ΩTOPSIS = 0.2625, ΩMOORA = 0.249, ΩPROMETHEE = 0.2277. Nevertheless, the influence of the method weights on the result is noticeable. The ranked result obtained by the TOPSIS method coincided with the ranked composite result, since the TOPSIS method had a greater weight than the rest of the techniques. The average ranks of the first and third alternatives may lead to different interpretations, as their estimates equal 1.5 and 2. The combined results have unequivocally identified the best alternative.

Wietendorf's linear normalization is appropriate for rearranging results of different scales as well as for the negative values of the PROMETHEE method. However, linear normalization has a disadvantage: it converts the estimate of the worst alternative into zero, and thus the weight of the method has no effect on the combined result for the worst alternative. The results managed by classical normalization are convenient to compare, because the sum of all results equals one. In the case of classical normalization, however, the negative results of the methods require additional data transformation, and the author of this paper proposes a method of transforming negative numbers. In this task, the choice of normalization method had no influence on the final combined result.

The article provides a method for verifying the stability of an MADM method, which ensures the validity of the evaluated result. The technique for validating the stability of an MADM method has a wide range of practical applications in decision-making problems where the evaluation is performed by several MADM methods. The proposed method considers the uncertainty of expert evaluation data and therefore decreases the subjectivity of the conducted evaluation. Further papers will focus more intensely on analyzing the sensitivity of fuzzy AHP methods by fluctuating the data and on investigating several algorithms of FAHP methods.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.
