Article

On the Statistical Discrepancy and Affinity of Priority Vector Heuristics in Pairwise-Comparison-Based Methods

by
Pawel Tadeusz Kazibudzki
Faculty of Economics and Management, Opole University of Technology, St. Luboszycka 7, 45-036 Opole, Poland
Entropy 2021, 23(9), 1150; https://doi.org/10.3390/e23091150
Submission received: 16 July 2021 / Revised: 17 August 2021 / Accepted: 30 August 2021 / Published: 1 September 2021
(This article belongs to the Special Issue Decision Making, Classical and Quantum Optimization Methods)

Abstract

There are numerous priority deriving methods (PDMs) for pairwise-comparison-based (PCB) problems. They are often examined within the Analytic Hierarchy Process (AHP), which applies the Principal Right Eigenvalue Method (PREV) in the process of prioritizing alternatives. It is known that when decision makers (DMs) are consistent with their preferences when making evaluations concerning various decision options, all available PDMs result in the same priority vector (PV). However, when the evaluations of DMs are inconsistent and their preferences concerning alternative solutions to a particular problem are not transitive (cardinally), the outcomes are often different. This research study examines selected PDMs in relation to their ranking credibility, which is assessed by relevant statistical measures. These measures determine the approximation quality of the selected PDMs. The examined estimates refer to the inconsistency of various Pairwise Comparison Matrices (PCMs)—i.e., W = (wij), wij > 0, where i, j = 1,…, n—which are obtained during the pairwise comparison simulation process examined with the application of Wolfram’s Mathematica Software. Thus, theoretical considerations are accompanied by Monte Carlo simulations that apply various scenarios for the PCM perturbation process and are designed for hypothetical three-level AHP frameworks. The examination results show the similarities and discrepancies among the examined PDMs from the perspective of their quality, which enriches the state of knowledge about the examined PCB prioritization methodology and provides further prospective opportunities.

1. Introduction

The method of creating a ranking based on pairwise comparisons of alternatives was already known in the Middle Ages. The first work on this subject was probably that by Ramon Lull [1], who described election processes based on mutual comparisons of alternatives. Over time, other research on the pairwise comparisons method appeared; e.g., studies on electoral systems, such as the Condorcet method and the Copeland method, and many others on social choice and welfare systems [2]. In time, alternatives began to be compared quantitatively, which was initially connected with the need to compare psychophysical stimuli [3,4]. This path was later developed [5] and used in various forms for different objectives, including economics [6], consumer research, psychometrics, health care and others. Thanks to Saaty and his seminal paper [7], in which he defined the Analytic Hierarchy Process (AHP), comparing alternatives in a pairwise mode began to be regarded as a multi-criteria decision-making method. The undisputable success of the AHP is probably due to the fact that Saaty proposed a complete solution including a ranking calculation algorithm, an inconsistency index as a method of determining data quality and a hierarchical model allowing decision makers (DMs) to handle multiple criteria [8,9,10,11]. Over time, numerous studies have presented scientific evidence of the fundamental flaws of the AHP; see, e.g., [12,13,14,15].
However, different research studies concerning the pairwise comparison method have resulted in many priority deriving methods (PDMs) which, for brevity, are not discussed in this article in detail; however, the interested reader may want to find references in which these methods are scrutinized (e.g., [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]). In addition, various inconsistency approaches for human judgments were devised, which also are not discussed in this paper in detail as they are not within the scope of this research. It seems probable that the two most popular PDMs for PCB ranking problems are the Principal Right Eigenvalue Method (PREV), proposed by Saaty [12,49,50] and the Logarithmic Least Squared Method (LLSM), also known as Geometric Mean Method (GMM), devised by Crawford and Williams [21,22]. A further well-known PDM is the Simple Normalized Column Sum Procedure (SNCS), proposed by Zahedi [47] and promoted, e.g., by Choo and Wedley [19] and Saaty [51]. Indeed, an underestimated and overlooked method in the literature is the last PDM selected for this research due to its features; i.e., the Logarithmic Squared Deviations Minimization Method (LSDM), elaborated and evaluated by Kazibudzki [30,52].
It is easy to verify that in the case of consistent human judgments that provide cardinally transitive Pairwise Comparison Matrices (PCM), all PDMs lead to the same solution. However, when inconsistent PCMs must be taken into account, the resulting rankings differ from each other. This research is part of the discussion of the properties of various PDMs applied for pairwise-comparison-based (PCB) problems, which are often examined with the application of the AHP [53]. Despite the large number of publications on the topic, this issue is considered inspiring and challenging as it seems that pairwise comparisons could lead to credible measures of DM preferences. Thus, deriving true priority vectors (PVs) from intuitive pairwise comparisons of decision makers (DMs) is also a crucial issue for the multiple criteria decision-making (MCDM) concept based on the AHP. Noticeably, the standard AHP applications commonly utilize PREV because it was derived mathematically by the creator of the AHP, who considered it the only correct solution for PCB problems [42,54,55,56].
The objective of this scientific research is to examine the similarities and discrepancies of a few selected PDMs in order to determine their suitability for PCB problems in relation to their ranking credibility. Thus, it was decided to apply Monte Carlo simulations (MCS) for this purpose. However, rather than simulating and analyzing results for a single PCM, as has been done thus far by many other authors, it was decided to design simulation scenarios and analyze their outcomes from a simple multi-criteria decision perspective, examined via the most common AHP framework, an approach thus far undertaken by only a few authors; e.g., [30,57]. As such, a three-level AHP model—alternatives, criteria and goal—is considered, which is assumed to deal with hypothetical decision problems. Then, the simulation results for the selected PDMs are compared. The examination results present the effect of various scenarios within the simulation process and reflect human judgment errors during pairwise comparisons. Thus, the selected PDMs are examined from the perspective of their ranking credibility, which is evaluated with the application of a few available statistical measures; i.e., the Mean Spearman Rank Correlation Coefficient (MSRC), the Mean Pearson Correlation Coefficient (MPCC) and the Mean Average Absolute Deviation (MAAD). These measures determine the differences in the quality of PV estimation, understood as the true-ranking preservation capability of a selected PDM, and were selected intentionally, considering that other, non-statistical compatibility indices exist in the literature; e.g., the Garuti index [58] or the Saaty compatibility index [59]. Certainly, different MCDM methods may yield different results when applied to the same problem [60], which is why even a single method, such as the AHP, can lead to different priorities depending on the PDM applied.
This phenomenon has already been studied, mainly by focusing on the ranks that different priorities imply [61]. This research paper extends that focus, concentrating on how the obtained rankings differ rather than on how close the obtained priority vectors (PVs) are to each other. That is why emphasis is placed on the rank preservation phenomenon instead of the closeness of the obtained PVs, as the former is applicable during compatibility analysis.
Given the reality of our physical world, no study is perfect. In order to compare the characteristics of the estimates obtained in the simulation process for the selected PDM, various scenarios were simulated in relation to different sources of PCM inconsistency. Fundamentally, PCM inconsistency commonly results from errors caused by the nature of human judgments and errors due to the technical realization of the pairwise comparison procedure; i.e., rounding errors and errors resulting from the forced reciprocity requirement commonly imposed in PCB ranking problems. All the above errors can be simulated, but the nature of human judgments can be represented only as the realization of some random process in accordance with the assumed probability distribution of the perturbation factor; e.g., uniform, gamma, truncated normal and log-normal. As this is only a stochastic process generated by a computer, it constitutes a certain limitation of the presented research.
The research paper is structured as follows: firstly, preliminaries about pairwise-comparison-based problems are presented (Section 2); then, the examination methodology is introduced and exemplified (Section 3) in two subsections: Section 3.1 is devoted to the concept design with preliminary results, and Section 3.2 introduces the target examination scenario and presents further examples of examination results. The results of the complete examination and a discussion are presented in Section 4, leading to the research conclusions (Section 5); the paper ends with final remarks.

2. Preliminaries about PCB Ranking Problems

The conventional PDM in the AHP is founded on the mathematical structure of consistent PCMs and the related capability of PREV to produce actual or approximate weights.
Oscar Perron proved that if W = (wij), wij > 0, where i, j = 1,…, n, then W has a simple positive eigenvalue λmax, called the principal eigenvalue of W, such that λmax > |λk| for all the remaining eigenvalues λk of W. Moreover, the principal right eigenvector w = [w1,…, wn]T, which is a solution of Ww = λmaxw, has wi > 0, i = 1,…, n. If the relative weights of a set of activities are known, they can be expressed as PCMs.
If we know W(w) but do not know w, Perron’s theorem can be used to solve this problem for w. In general, the eigenvalue problem leads to n values of λ, with an associated vector w for each of them. The PCM in the AHP reflects the relative weights of the entities considered (criteria, scenarios, players, alternatives, etc.), so the matrix W(w) has a particular structure: each subsequent row is a constant multiple of the first row. In this case, the matrix W(w) has only one non-zero eigenvalue, and since the sum of the eigenvalues of a positive matrix is equal to the sum of its diagonal elements, this single non-zero eigenvalue equals the size of the matrix; i.e., λmax = n. The norm of the vector w can be written as ‖w‖ = eTw, where e = [1, 1,…, 1]T, and w can be normalized by dividing it by its norm. For the sake of clarity, w is hereafter considered in its normalized form.
Taking the above into consideration, the conventional concept of a PDM in the AHP can be presented as follows:
$$
\begin{bmatrix}
w_{1}/w_{1} & w_{1}/w_{2} & w_{1}/w_{3} & \cdots & w_{1}/w_{n} \\
w_{2}/w_{1} & w_{2}/w_{2} & w_{2}/w_{3} & \cdots & w_{2}/w_{n} \\
w_{3}/w_{1} & w_{3}/w_{2} & w_{3}/w_{3} & \cdots & w_{3}/w_{n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
w_{n}/w_{1} & w_{n}/w_{2} & w_{n}/w_{3} & \cdots & w_{n}/w_{n}
\end{bmatrix}
\times
\begin{bmatrix} w_{1} \\ w_{2} \\ w_{3} \\ \vdots \\ w_{n} \end{bmatrix}
=
n \begin{bmatrix} w_{1} \\ w_{2} \\ w_{3} \\ \vdots \\ w_{n} \end{bmatrix}
\tag{1}
$$
Thus, the following definitions (D) can be also presented:
  • D[1]: If the elements of the matrix W(w) satisfy the prerequisite wij = 1/wji for all i, j = 1,…, n, then the matrix W(w) is called reciprocal.
  • D[2]: If both of the following assumptions hold: (a) whenever an element wij is not less than an element wik for some i, then wij ≥ wik for all i = 1,…, n; and (b) whenever an element wji is not less than an element wki for some i, then wji ≥ wki for all i = 1,…, n, then the matrix W(w) is called ordinally transitive.
  • D[3]: If the elements of a matrix W(w) satisfy the condition wikwkj = wij for all i, j, k = 1,…, n, and the matrix is reciprocal, then it is called consistent or cardinally transitive.
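Definitions D[1] and D[3] translate directly into numerical checks. The following is a minimal illustrative sketch in Python (the study itself used Wolfram Mathematica); the tolerance is a hypothetical choice:

```python
# Illustrative checks of definitions D[1] and D[3]; tol is a hypothetical choice.

def is_reciprocal(W, tol=1e-9):
    # D[1]: w_ij = 1 / w_ji for all i, j
    n = len(W)
    return all(abs(W[i][j] - 1.0 / W[j][i]) < tol
               for i in range(n) for j in range(n))

def is_consistent(W, tol=1e-9):
    # D[3]: reciprocal and w_ik * w_kj = w_ij for all i, j, k
    n = len(W)
    return is_reciprocal(W, tol) and all(
        abs(W[i][k] * W[k][j] - W[i][j]) < tol
        for i in range(n) for j in range(n) for k in range(n))

# A PCM built from known weights is consistent by construction:
w = [0.35, 0.25, 0.10, 0.30]
W = [[wi / wj for wj in w] for wi in w]
print(is_reciprocal(W), is_consistent(W))  # -> True True
```

Any single perturbed off-diagonal entry breaks both properties, which is exactly the situation the rest of the paper deals with.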
Certainly, when the AHP is utilized, a W(w) which would reflect the true weights given by the actual PV is unknown.
Since the human mind is not an accurate measuring device, it does not give accurate results in tasks such as the following: “compare—using a given ratio scale—your preferences for alternative 1 and alternative 2”. Thus, W(w) is not known, but only its estimate X(x), containing intuitive judgments that are more or less close to W(w) depending on the DM’s individual taste, specific knowledge, experience, ability and even momentary mood or frame of mind. In such cases, the consistency property does not hold, and the conventional notion of the PDM in the AHP is no longer applicable. However, it has been shown that for any PCM, small perturbations in the entries entail similar perturbations in the eigenvalues, so Perron’s theorem can still be used to estimate the true PV. Then, instead of Equation (1), the solution of Equation (2) gives us w as the principal right eigenvector associated with λmax.
$$
\begin{bmatrix}
x_{1}/x_{1} & x_{1}/x_{2} & x_{1}/x_{3} & \cdots & x_{1}/x_{n} \\
x_{2}/x_{1} & x_{2}/x_{2} & x_{2}/x_{3} & \cdots & x_{2}/x_{n} \\
x_{3}/x_{1} & x_{3}/x_{2} & x_{3}/x_{3} & \cdots & x_{3}/x_{n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{n}/x_{1} & x_{n}/x_{2} & x_{n}/x_{3} & \cdots & x_{n}/x_{n}
\end{bmatrix}
\times
\begin{bmatrix} w_{1} \\ w_{2} \\ w_{3} \\ \vdots \\ w_{n} \end{bmatrix}
=
\lambda_{\max} \begin{bmatrix} w_{1} \\ w_{2} \\ w_{3} \\ \vdots \\ w_{n} \end{bmatrix}
\tag{2}
$$
In practice, the solution of PREV is obtained by raising the matrix X(x) to a sufficiently large power, then summing the rows of X(x) and normalizing the resulting vector to obtain a vector w, which can be expressed by the following formula:
$$
w = \lim_{k \to \infty} \frac{X^{k}(x) \, e}{e^{T} X^{k}(x) \, e}
\tag{3}
$$
where e = [1, 1,…, 1]T.
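Formula (3) can be sketched in a few lines of illustrative Python (the study itself used Wolfram Mathematica); the repeated squaring and the iteration count are hypothetical implementation choices:

```python
# Sketch of formula (3): raise X(x) to a large power, sum its rows, normalize.
# Rescaling after each squaring avoids floating-point overflow.

def prev_priority_vector(X, squarings=30):
    n = len(X)
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    P = [row[:] for row in X]
    for _ in range(squarings):
        P = matmul(P, P)                          # P becomes X^(2^k)
        s = sum(sum(row) for row in P)
        P = [[v / s for v in row] for row in P]   # rescale to avoid overflow
    row_sums = [sum(row) for row in P]
    total = sum(row_sums)
    return [r / total for r in row_sums]

# For a consistent PCM built from true weights, PREV recovers them exactly:
w = [0.35, 0.25, 0.10, 0.30]
W = [[wi / wj for wj in w] for wi in w]
print(prev_priority_vector(W))  # ~ [0.35, 0.25, 0.10, 0.30]
```

For an inconsistent PCM the same routine converges to the principal right eigenvector, which is the vector the PREV method reports.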
Noticeably, the relation between elements of X(x) and W(w) can be expressed in the form of the following formula:
$$
x_{ij} = e_{ij} \, w_{ij}
\tag{4}
$$
where eij is a disturbance coefficient oscillating close to unity; e.g., eij ∈ [0.5, 1.5]. It is important to emphasize that, in the statistical approach, eij reflects the realization of a random variable with a given probability distribution (PD) that can be modeled and that reflects imperfect human pairwise comparisons. In the literature, the following types of PDs are often considered for different implementation purposes: gamma, log-normal, truncated normal or uniform [36,47]. However, in addition to these most popular types of PDs, one can also find applications of the Cauchy, Laplace, triangular or beta PDs [24] and the Fisher–Snedecor PD, which was recently introduced by Kazibudzki [30]. Usually, the maximal spread is eij ∈ [0.01, 1.99], which may be perceived as strange, as this interval is highly asymmetric. However, it is perfectly reasonable to have asymmetric intervals for perturbation factors, as DM judgments are expressed on a particular numerical scale whose values are usually not higher than nine (Saaty’s scale). Thus, multiplying five by more than two, for example, is simply pointless, as the result must still be rounded to the nearest value of the scale; e.g., Saaty’s scale, whose maximal value equals nine. On the other hand, multiplying 9 by 0.01 gives 0.09, which can be naturally rounded to 1/9 ≈ 0.1(1) and may reflect, for example, a reversal of preferences. Nevertheless, a symmetric interval for a perturbation factor with a seminally proposed PD, and the outcome of its application, is also presented in this research in Section 4.
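As an illustration, a perturbed PCM X(x) can be generated from a consistent W(w) by drawing each eij from one of the distributions mentioned above. The sketch below is illustrative Python (not the paper’s Mathematica code) and uses the uniform case eij ∈ [0.5, 1.5]; the other distributions drop in the same way:

```python
# Sketch of Equation (4): x_ij = e_ij * w_ij with e_ij ~ Uniform(0.5, 1.5).
import random

def perturb(W, low=0.5, high=1.5, rng=random):
    n = len(W)
    X = [row[:] for row in W]
    for i in range(n):
        for j in range(n):
            if i != j:
                X[i][j] = rng.uniform(low, high) * W[i][j]  # x_ij = e_ij * w_ij
    return X

w = [0.35, 0.25, 0.10, 0.30]
W = [[wi / wj for wj in w] for wi in w]   # consistent PCM built from w
X = perturb(W, rng=random.Random(2021))   # seeded for reproducibility
```

Note that perturbing both triangles independently destroys reciprocity, which is why a forced-reciprocity step appears later in the simulation scenarios.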
In general, the inconsistency of the perturbed PCM reflects errors caused by the nature of human judgment and errors due to the technical implementation of the pairwise comparison procedure. The latter are mainly rounding errors and errors resulting from forced reciprocity requirements. Rounding errors are related to the numerical ratio scale whose values future DMs must use to express their judgments in a certain way [25,62,63,64]. In common AHP applications, the Saaty numerical scale, consisting of the integers from 1 to 9 and their inverses, is by far the most popular. However, other scales are also known [25,65,66,67,68,69,70], such as the geometric scale, in which the linguistic variables of the Saaty scale are assigned different numerical values; its most common form, and thus the one used in this study, is 2^(n/2), where n ranges over the integers from −8 to 8, although an arbitrarily defined numerical scale including the integers from 1 to n and their inverses is also possible.
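The rounding step amounts to snapping each ratio to the nearest scale value. The illustrative sketch below encodes Saaty’s scale and the 2^(n/2) geometric scale described above; the tie-handling rule is a hypothetical choice, not taken from the paper:

```python
# Saaty's scale: {1/9, ..., 1/2, 1, 2, ..., 9}; geometric scale: 2^(n/2), n = -8..8.
SAATY = sorted([1.0 / k for k in range(2, 10)] + [float(k) for k in range(1, 10)])
GEOMETRIC = [2 ** (n / 2) for n in range(-8, 9)]

def round_to_scale(x, scale=SAATY):
    # nearest scale value; on an exact tie, min() keeps the smaller one
    return min(scale, key=lambda s: abs(s - x))

print(round_to_scale(1.75))    # -> 2.0
print(round_to_scale(0.5833))  # -> 0.5
```

The same helper works for an arbitrarily defined 1-to-n scale by passing a different list.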
The basic concept of the AHP certainly attracts attention and as such is still being developed; see, e.g., [37,71,72,73,74,75,76]. At the same time, it is argued that as long as inconsistency in pairwise comparisons is allowed, PREV is the fundamental theoretical concept for ranking PCB problems and no other PDM matches it. Nevertheless, a number of other PDMs have been proposed over the past three decades, starting with the most popular, the LLSM [21,22], and other methods based on optimization models with constraints (see [28,52,62,77]), including the least-squares method [36,47] and various versions of goal programming methods (see [18,19,27,78,79,80]), as well as methods based on statistical concepts (see [12,16,34,46,61,81,82]), methods based on fuzzy preference descriptions (see [48,72,83]) and heuristic algorithms (see [33,35,84,85]).
It has been argued that the primary AHP–PDM—i.e., the PREV method—is necessary and sufficient for unambiguous ranking with the ratio scale inherent in inconsistent pairwise comparisons [42,50,54,55,86,87]. However, this approach has also been heavily criticized; see, for example, [13,30,63,80,88,89,90,91,92]. Therefore, there are optional PDMs that differ from the basic PDM concept. Many of these are based on optimization and finding the vector w as a solution to the minimization problem given by the formula
$$
\min \, D\left(X(x), W(w)\right)
\tag{5}
$$
with some accompanying constraints, such as positive coefficients and the normalization condition. Since the distance function D measures the separation between the matrices X(x) and W(w), different definitions of D lead to different PDM estimation results. Chu et al. [19] describe and compare 18 PDMs, although some authors suggest that only 15 are distinctive. Undoubtedly, several other PDMs have appeared in the literature since Chu et al. [19] published their study; see, for example, [29,32,33,34,93,94,95]. Obviously, if the PCM is consistent, then all known PDMs coincide, although they do not guarantee that the resulting PV is error-free; see, e.g., [12,45,70,96]. However, in real situations, as noted earlier, human judgments inevitably lead to inconsistent PCMs, since inconsistency is a natural consequence of the dynamics of the human mind as well as a consequence of the query methodology, incorrect inputs of judgment values and scaling procedures (i.e., rounding errors).
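For instance, when D is the sum of squared logarithmic deviations, the minimization has a well-known closed-form solution: the LLSM/GMM vector is proportional to the geometric means of the rows of X(x). A minimal illustrative sketch (in Python; for a consistent PCM it recovers the true weights exactly):

```python
# LLSM/GMM closed form: w_i proportional to the geometric mean of row i of X.
import math

def gmm_priority_vector(X):
    n = len(X)
    gm = [math.prod(row) ** (1.0 / n) for row in X]
    total = sum(gm)
    return [g / total for g in gm]          # normalized priority vector

w = [0.35, 0.25, 0.10, 0.30]                # hypothetical true weights
W = [[wi / wj for wj in w] for wi in w]     # consistent PCM built from w
print(gmm_priority_vector(W))               # ~ [0.35, 0.25, 0.10, 0.30]
```

Unlike PREV, this solution needs no iteration, which is one reason the GMM is often used as a baseline in comparisons of PDMs.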
In this research study, apart from PREV, three optional PDMs are examined. They are defined by the formulae presented in Table 1.

3. Examination Methodology

3.1. Concept Design with Preliminary Results

The first step in PCB problems using the AHP is to create a hierarchy by breaking the specific problem into its major components. The basic AHP scenario includes a goal (an expression of the overall objective), criteria (factors to be considered in arriving at the final choice) and alternatives (feasible options for achieving the final objective). Thus, the most basic AHP decision model consists of a goal–criteria–alternatives sequence. Therefore, this study adopts a basic three-level hierarchy; in the examined examples, it comprises four criteria and four alternatives under each criterion.
The intent of this research is to examine the performance of PREV against the background of the performance of other selected PDMs available for PCB problems elaborated within the AHP. In order to achieve this objective, Monte Carlo simulations (MCS) were applied, but not as commonly performed; i.e., dedicated to a single PCM. This research involves an MCS scenario that encompasses the entire goal–criteria–alternatives sequence of the AHP, which is supposed to reflect the hypothetical PCB decisional problem (see the examples presented hereafter (Examples 1A and 1B)).
Firstly, the examination framework is presented in its simplified version as a methodological example. Thus, only technical distortions are applied: those resulting from rounding errors during the application of Saaty’s scale and from the standard requirements of the AHP (i.e., forced reciprocity). They are demonstrated in the following hypothetical AHP model with three levels (a goal, four criteria and four alternatives). This model assumes that the relative ratios of some physical attributes of certain objects are predetermined, and thus $H_{G}PV_{C}$, $H_{C1\text{-}C2}PV_{A}$ and $H_{C3\text{-}C4}PV_{A}$ are known. On the basis of the provided PV elements, the respective PCMs are reconstructed as shown in Equation (1).
Example 1A:
$H_{G}PV_{C}$, and its related PCM denoting the weight quotients of $H_{G}PV_{C}$, reflecting the pairwise comparison results of the criteria with respect to the goal:
$$
\begin{bmatrix}
1 & 1.4 & 3.5 & 1.16667 \\
0.714286 & 1 & 2.5 & 0.833333 \\
0.285714 & 0.4 & 1 & 0.333333 \\
0.857143 & 1.2 & 3 & 1
\end{bmatrix},
\qquad
H_{G}PV_{C} = [0.35,\ 0.25,\ 0.10,\ 0.30]^{T}
\tag{6}
$$
$H_{C1\text{-}C2}PV_{A}$, and its related PCM denoting the weight quotients of $H_{C1\text{-}C2}PV_{A}$, reflecting the pairwise comparison results of the alternatives with respect to criteria C1–C2:
$$
\begin{bmatrix}
1 & 1.4 & 2.33333 & 1.4 \\
0.714286 & 1 & 1.66667 & 1 \\
0.428571 & 0.6 & 1 & 0.6 \\
0.714286 & 1 & 1.66667 & 1
\end{bmatrix},
\qquad
H_{C1\text{-}C2}PV_{A} = [0.35,\ 0.25,\ 0.15,\ 0.25]^{T}
\tag{7}
$$
$H_{C3\text{-}C4}PV_{A}$, and its related PCM denoting the weight quotients of $H_{C3\text{-}C4}PV_{A}$, reflecting the pairwise comparison results of the alternatives with respect to criteria C3–C4:
$$
\begin{bmatrix}
1 & 0.666667 & 0.285714 & 0.25 \\
1.5 & 1 & 0.428571 & 0.375 \\
3.5 & 2.33333 & 1 & 0.875 \\
4 & 2.66667 & 1.14286 & 1
\end{bmatrix},
\qquad
H_{C3\text{-}C4}PV_{A} = [0.10,\ 0.15,\ 0.35,\ 0.40]^{T}
\tag{8}
$$
where $H_{G}PV_{C}$, $H_{C1\text{-}C2}PV_{A}$ and $H_{C3\text{-}C4}PV_{A}$ denote the partial hypothetical PVs in the model.
After standard AHP synthesis, the hypothetical total PV (HTPV) is obtained and given as HTPV = [0.25, 0.21, 0.23, 0.31]T. Next, following the simplified version of the MCS scenario in this example, each PCM in the presented framework is perturbed. For simplicity of illustration, only two kinds of prospective PCM distortions are applied: rounding errors (each element of the particular PCM is rounded to Saaty’s scale) and reciprocity imposition errors (the PCM is transformed to be reciprocal in such a way that only the elements above its diagonal are taken into consideration, and the elements below its diagonal are replaced by the inverses of their counterparts from above the diagonal). Next, on the basis of each PCM perturbed in the above way, the respective partial PVs (PPVPREV) are obtained with the application of the selected PDM, i.e., PREV. Finally, the total computed PV (TCPVPREV) for the exemplary AHP model is obtained (see Example 1B).
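The two distortions just described can be sketched as follows (illustrative Python, not the author’s Mathematica code; the tie-breaking rule, which takes the smaller scale value, is an assumption that happens to reproduce this example). Applied to the goal-level PCM rebuilt from Equation (6), it yields the corresponding PCM of Example 1B:

```python
# Round every entry to Saaty's scale, then force reciprocity from the upper
# triangle (assumed tie rule: the smaller scale value wins).
SAATY = sorted([1.0 / k for k in range(2, 10)] + [float(k) for k in range(1, 10)])

def distort(W):
    n = len(W)
    X = [[min(SAATY, key=lambda s: abs(s - W[i][j])) for j in range(n)]
         for i in range(n)]
    for i in range(n):
        X[i][i] = 1.0
        for j in range(i + 1, n):
            X[j][i] = 1.0 / X[i][j]   # below-diagonal entries replaced by inverses
    return X

# Goal-level PCM rebuilt from the hypothetical PV [0.35, 0.25, 0.10, 0.30]:
w = [0.35, 0.25, 0.10, 0.30]
W = [[wi / wj for wj in w] for wi in w]
for row in distort(W):
    print(row)   # reproduces the goal-level PCM of Example 1B
```

Feeding each of the three distorted PCMs to a PDM and synthesizing the partial PVs completes one simulation run of the simplified scenario.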
Example 1B:
PCM with the criteria weights designated with respect to the goal, and $PPV_{PREV}^{GOAL}$ computed on the basis of this PCM with the application of PREV:
$$
\begin{bmatrix}
1 & 1 & 3 & 1 \\
1 & 1 & 2 & 1 \\
1/3 & 1/2 & 1 & 1/3 \\
1 & 1 & 3 & 1
\end{bmatrix},
\qquad
PPV_{PREV}^{GOAL} = [0.304999,\ 0.276859,\ 0.113143,\ 0.304999]^{T}
$$
PCM with the weights of the alternatives designated with respect to criteria C1–C2, and $PPV_{PREV}^{C1\text{-}C2}$ computed on the basis of this PCM with the application of PREV:
$$
\begin{bmatrix}
1 & 1 & 2 & 1 \\
1 & 1 & 2 & 1 \\
1/2 & 1/2 & 1 & 1/2 \\
1 & 1 & 2 & 1
\end{bmatrix},
\qquad
PPV_{PREV}^{C1\text{-}C2} = [0.285714,\ 0.285714,\ 0.142857,\ 0.285714]^{T}
$$
PCM with the weights of the alternatives designated with respect to criteria C3–C4, and $PPV_{PREV}^{C3\text{-}C4}$ computed on the basis of this PCM with the application of PREV:
$$
\begin{bmatrix}
1 & 1/2 & 1/4 & 1/4 \\
2 & 1 & 1/2 & 1/3 \\
4 & 2 & 1 & 1 \\
4 & 3 & 1 & 1
\end{bmatrix},
\qquad
PPV_{PREV}^{C3\text{-}C4} = [0.0887547,\ 0.1611320,\ 0.3550190,\ 0.3950950]^{T}
$$
After standard AHP synthesis, the following result is obtained: TCPVPREV = [0.2034, 0.2336, 0.2316, 0.3315]T, which is different from HTPV = [0.25, 0.21, 0.23, 0.31]T. The comparison of HTPV with its estimate TCPVPREV enables selected statistical measures to be used—i.e., Spearman Ranks Correlation Coefficient (SRCC), Pearson Correlation Coefficient (PCC) and Mean Absolute Deviation (MAD)—which reflect the approximation quality of PREV. Mean values of the above measures were examined during the MCS in this research; the formulae for these are provided in Table 2.
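The synthesis step itself is a weighted sum: since in this hypothetical model criteria C1 and C2 share one alternatives PV and C3 and C4 share the other, each TCPV entry equals (g1 + g2) times the C1–C2 entry plus (g3 + g4) times the C3–C4 entry, with g taken from the goal-level PV. A quick illustrative check in Python on the Example 1B numbers:

```python
# Standard (distributive) AHP synthesis for the Example 1B partial PVs.
g    = [0.304999, 0.276859, 0.113143, 0.304999]      # goal-level PPV
pv12 = [0.285714, 0.285714, 0.142857, 0.285714]      # alternatives PPV for C1-C2
pv34 = [0.0887547, 0.1611320, 0.3550190, 0.3950950]  # alternatives PPV for C3-C4

tcpv = [(g[0] + g[1]) * a + (g[2] + g[3]) * b for a, b in zip(pv12, pv34)]
print(tcpv)  # agrees with TCPVPREV = [0.2034, 0.2336, 0.2316, 0.3315] to 4 decimals
```

This confirms that the reported TCPVPREV follows from the three partial PVs by plain weighted aggregation.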
For the above illustrative values of HTPV and TCPVPREV, the presented measures are as follows: SRCC = 0.2, PCC = 0.8142, MAD = 0.023325. Noticeably, the comparison of the approximation quality of any PDM available for PCB ranking problems is possible in this way. Thus, it is also possible to examine the selected PDMs for the AHP. The MCS designed for this purpose—i.e., processing an algorithm that emulated the steps explained above 10,000 times—provided the scores shown in Table 3 and Table 4.
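The three measures for a single simulation run need no statistical library; the illustrative sketch below reproduces the values just quoted:

```python
# SRCC, PCC and MAD for one pair of vectors (no ties assumed for SRCC).

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def srcc(x, y):                  # Spearman rank correlation coefficient
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

def pcc(x, y):                   # Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mad(x, y):                   # mean absolute deviation
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

htpv = [0.25, 0.21, 0.23, 0.31]
tcpv = [0.2034, 0.2336, 0.2316, 0.3315]
print(srcc(htpv, tcpv))  # SRCC = 0.2
print(pcc(htpv, tcpv))   # PCC ~ 0.8142
print(mad(htpv, tcpv))   # MAD = 0.023325
```

Averaging these three quantities over all simulation runs gives the MSRC, MPCC and MAAD reported in the tables.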
Considering the results presented in Table 3 and Table 4, it should be noticed that the rank preservation capability of PREV is slightly lower than that of LSDM. It can also be noticed that LSDM performs slightly worse than PREV from the perspective of the MPCC and MAAD values. The performance of the other examined PDMs is slightly worse in both scenarios and from the perspective of all performance measures taken into consideration in this study.

3.2. Research Target Scenario Analysis with Further Results

Hereafter, the examination scenario is exemplified in its target version. Thus, not only the technical distortions resulting from rounding errors during the application of Saaty’s scale and from the standard requirements of the AHP (i.e., forced reciprocity) are applied, but human judgment errors are also considered in the hypothetical AHP model with three levels (a goal, four criteria and four alternatives) depicted earlier in Example 1A. Hence, it is still assumed that the relative ratios of some physical attributes of certain objects are predetermined; thus, $H_{G}PV_{C}$, $H_{C1\text{-}C2}PV_{A}$ and $H_{C3\text{-}C4}PV_{A}$ are known, and their respective PCMs are computed as in Equation (1). For the reader’s convenience, the model is duplicated herein and renamed Example 2A for reference hereafter.
Example 2A:
For the assumed $H_{G}PV_{C}$ and its related PCM representing the weight quotients of $H_{G}PV_{C}$, reflecting the pairwise comparison results of the criteria with respect to the goal, see Equation (6). For the assumed $H_{C1\text{-}C2}PV_{A}$ and its related PCM denoting the weight quotients of $H_{C1\text{-}C2}PV_{A}$, reflecting the pairwise comparison results of the alternatives with respect to criteria C1–C2, see Equation (7). For the assumed $H_{C3\text{-}C4}PV_{A}$ and its related PCM denoting the weight quotients of $H_{C3\text{-}C4}PV_{A}$, reflecting the pairwise comparison results of the alternatives with respect to criteria C3–C4, see Equation (8).
Next, following the target version of the MCS scenario in this example, each PCM in the framework of Example 2A is perturbed. This time, three kinds of prospective PCM distortions are applied: human judgment errors reflected by the applied perturbation factor eij = 0.5, rounding errors (each element of the particular PCM is rounded to Saaty’s scale) and reciprocity imposition errors (the PCM is transformed to be reciprocal in such a way that only the elements above its diagonal are taken into consideration, and the elements below its diagonal are replaced by the inverses of their counterparts from above the diagonal). On the basis of each PCM perturbed in the above way, the respective partial PVs (PPVPREV) are obtained with the application of the selected PDM, i.e., PREV. Finally, the total computed PV (TCPVPREV) for the exemplary AHP model is obtained; see Examples 2B–2D.
Example 2B:
Pairwise comparison results of criteria with respect to the goal after the implementation of the perturbation factor eij = 0.5:
$$
\begin{bmatrix}
1 & 0.7 & 1.75 & 0.5833 \\
0.3571 & 1 & 1.25 & 0.4167 \\
0.1429 & 0.2 & 1 & 0.1667 \\
0.4286 & 0.6 & 1.5 & 1
\end{bmatrix}
$$
Pairwise comparison results of alternatives with respect to criteria C1–C2 after the implementation of the perturbation factor eij = 0.5:
$$
\begin{bmatrix}
1 & 0.7 & 1.1667 & 0.7 \\
0.3571 & 1 & 0.8333 & 0.5 \\
0.2143 & 0.3 & 1 & 0.3 \\
0.3571 & 0.5 & 0.8333 & 1
\end{bmatrix}
$$
Pairwise comparison results of alternatives with respect to criteria C3–C4 after the implementation of the perturbation factor eij = 0.5:
$$
\begin{bmatrix}
1 & 0.3333 & 0.1429 & 0.125 \\
0.75 & 1 & 0.2143 & 0.1875 \\
1.75 & 1.1667 & 1 & 0.4375 \\
2 & 1.3333 & 0.5714 & 1
\end{bmatrix}
$$
Example 2C:
Pairwise comparison results of criteria with respect to the goal after the implementation of the perturbation factor eij = 0.5 and rounding errors (each element of the particular PCM is rounded to Saaty’s scale):
$$
\begin{bmatrix}
1 & 0.5 & 2 & 0.5 \\
0.3333 & 1 & 1 & 0.5 \\
0.1429 & 0.2 & 1 & 0.1667 \\
0.5 & 0.5 & 2 & 1
\end{bmatrix}
$$
Pairwise comparison results of alternatives with respect to criteria C1–C2 after the implementation of the perturbation factor eij = 0.5 and rounding errors (each element of the particular PCM is rounded to Saaty’s scale):
$$
\begin{bmatrix}
1 & 0.5 & 1 & 0.5 \\
0.3333 & 1 & 1 & 0.5 \\
0.2 & 0.3333 & 1 & 0.3333 \\
0.3333 & 0.5 & 1 & 1
\end{bmatrix}
$$
Pairwise comparison results of alternatives with respect to criteria C3–C4 after the implementation of the perturbation factor eij = 0.5 and rounding errors (each element of the particular PCM is rounded to Saaty’s scale):
$$
\begin{bmatrix}
1 & 0.3333 & 0.1429 & 0.125 \\
1 & 1 & 0.2 & 0.2 \\
2 & 1 & 1 & 0.5 \\
2 & 1 & 0.5 & 1
\end{bmatrix}
$$
Example 2D:
Pairwise comparison results of the criteria with respect to the goal after the implementation of the perturbation factor eij = 0.5, rounding errors (each element of the particular PCM is rounded to Saaty’s scale) and forced reciprocity errors (the PCM is transformed to be reciprocal in such a way that only the elements above its diagonal are taken into consideration, and the elements below its diagonal are replaced by the inverses of their counterparts from above the diagonal), with $PPV_{PREV}^{GOAL}$ computed on the basis of the obtained PCM with the application of PREV:
$$
\begin{bmatrix}
1 & 0.5 & 2 & 0.5 \\
2 & 1 & 1 & 0.5 \\
0.5 & 1 & 1 & 0.1667 \\
2 & 2 & 6 & 1
\end{bmatrix},
\qquad
PPV_{PREV}^{GOAL} = [0.18215,\ 0.22278,\ 0.12115,\ 0.47392]^{T}
$$
Pairwise comparison results of the alternatives with respect to criteria C1–C2 after the implementation of the perturbation factor eij = 0.5, rounding errors (each element of the particular PCM is rounded to Saaty’s scale) and forced reciprocity errors (the PCM is transformed to be reciprocal in such a way that only the elements above its diagonal are taken into consideration, and the elements below its diagonal are replaced by the inverses of their counterparts from above the diagonal), with $PPV_{PREV}^{C1\text{-}C2}$ computed on the basis of the obtained PCM with the application of PREV:
$$
\begin{bmatrix}
1 & 0.5 & 1 & 0.5 \\
2 & 1 & 1 & 0.5 \\
1 & 1 & 1 & 0.3333 \\
2 & 2 & 3 & 1
\end{bmatrix},
\qquad
PPV_{PREV}^{C1\text{-}C2} = [0.16406,\ 0.23278,\ 0.17510,\ 0.42806]^{T}
$$
Pairwise comparison results of alternatives with respect to criteria C3–C4 after the implementation of the perturbation factor eij = 0.5, rounding errors (each element of the particular PCM is rounded to Saaty’s scale) and forced reciprocity errors (the PCM is transformed to be reciprocal in such a way that only elements from above its diagonal are taken into consideration, and elements below its diagonal are replaced by inverses of their counterparts from above its diagonal), with PPV_PREV^C3–C4 computed on the basis of the obtained PCM with the application of PREV:
        A1      A2      A3      A4
A1      1       0.3333  0.1429  0.125
A2      3       1       0.2     0.2
A3      7       5       1       0.5
A4      8       5       2       1

PPV_PREV^C3–C4 = [0.04698, 0.10012, 0.34774, 0.50517]T
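The forced-reciprocity transformation and the PREV prioritization used in Example 2D can be sketched in a few lines. The snippet below is a minimal Python rendering (the study itself used Wolfram’s Mathematica; the function names are mine): it overwrites the lower triangle of a PCM with the reciprocals of the upper-triangle entries and then derives the priority vector as the normalized principal right eigenvector.

```python
import numpy as np

def force_reciprocity(W):
    """Keep the entries above the diagonal and replace those below
    the diagonal by the reciprocals of their upper counterparts."""
    W = np.asarray(W, dtype=float).copy()
    n = W.shape[0]
    for i in range(n):
        W[i, i] = 1.0
        for j in range(i + 1, n):
            W[j, i] = 1.0 / W[i, j]
    return W

def prev_priority(W):
    """PREV: normalized principal right eigenvector of the PCM."""
    vals, vecs = np.linalg.eig(W)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# the perturbed and rounded C3-C4 matrix of Example 2D
W = force_reciprocity([[1, 1/3, 1/7, 1/8],
                       [3, 1, 1/5, 1/5],
                       [7, 5, 1, 1/2],
                       [8, 5, 2, 1]])
pv = prev_priority(W)
```

Applied to the C3–C4 matrix above, the computed vector agrees with the reported PPV_PREV^C3–C4 values to the printed precision.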
After standard AHP synthesis, the following result is obtained: TCPVPREV = [0.09439, 0.15383, 0.27783, 0.47394]T, which again differs from HTPV = [0.25, 0.21, 0.23, 0.31]T. Comparing HTPV with the estimated TCPVPREV, the performance measures mentioned before—i.e., SRCC, PCC and MAD—which reflect the approximation quality of PREV can be established again. For the above exemplary values of HTPV and TCPVPREV, these measures differ from before and are as follows: SRCC = 0.4, PCC = 0.76944, MAD = 0.10589. Surprisingly, SRCC in this case is twice as high as in the first example, although more PCM distortions were applied this time.
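For readers who wish to recompute these figures, the three measures reduce to a few lines of NumPy under their usual definitions (SRCC as the Pearson correlation of ranks, MAD as the mean absolute deviation between the two vectors); this is my own sketch, not code from the study.

```python
import numpy as np

def ranks(v):
    """1-based ascending ranks of the elements of v."""
    order = np.argsort(v)
    r = np.empty(len(v))
    r[order] = np.arange(1, len(v) + 1)
    return r

def pcc(x, y):
    """Pearson correlation coefficient."""
    x = np.asarray(x) - np.mean(x)
    y = np.asarray(y) - np.mean(y)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pcc(ranks(x), ranks(y))

def mad(x, y):
    """Mean absolute deviation between two priority vectors."""
    return float(np.mean(np.abs(np.asarray(x) - np.asarray(y))))

HTPV = [0.25, 0.21, 0.23, 0.31]
TCPV = [0.09439, 0.15383, 0.27783, 0.47394]
```

With the two vectors of Example 2D, these functions return SRCC = 0.4, PCC ≈ 0.76944 and MAD ≈ 0.10589, as reported above.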

4. Results of Complete Examination with Discussion

As demonstrated in Section 3, the approximation quality of any PDM available for PCB ranking problems can be compared, which makes it possible to examine several selected PDMs for the AHP. The MCS designed for this purpose—i.e., processing the algorithm emulating the steps of Examples 2A–2D 25,000 times (250 distinctive AHP frameworks, each perturbed 100 times)—provided the correlation scores presented in Table A1, Table A2, Table A3 and Table A4. In Table 5, Table 6, Table 7 and Table 8, the discrepancies among the correlation scores obtained by the examined PDMs are presented in relation to selected referential values, which in these cases are the correlation scores obtained by PREV.
The tables should be read from left to right and row by row from top to bottom. The first three columns on the left side of each table provide the simulation parameters taken into consideration; i.e., the applied preference scale, the interval for the perturbation factor, and the sets of numbers of alternatives and criteria applied during the MCS. The remaining columns give the differences between performance statistics for each simulation scenario.
It should be emphasized here that many PDMs have been proposed thus far, and their effectiveness has been evaluated by various means. Different studies and different measures of PDM effectiveness lead to different conclusions [97,98]. For example, Choo et al. [99] recommend LLSM as the best PDM, with a simple formula for computing PVs and many desirable properties discovered by Fichtner [100]; the method is very popular as the best alternative to PREV. On the other hand, some support for SNCS also exists—e.g., [36,47,101]—and LSDM is also considered an efficient PDM; see, e.g., [30]. Basically, it is agreed that many research results, including those of this work, do not support the recommendation of Saaty and Vargas [42] and Saaty and Hu [55] that PREV is the only PDM which should be used when pairwise comparisons are not entirely consistent. It is also agreed, as stated by Golany and Kress [102], that the selection of a PDM for PCB problems should be dictated by the desired measure of the PDM’s effectiveness, as different error measures mathematically favor different PDMs. Hence, as suggested by Bajwa et al. [101], the defining question is not which PDM is superior, but which application results are expected and/or which effectiveness or performance criteria are more valued. It is also believed that this research supports the conclusion stated by Saaty and Hu [55] that there is a difference between metric topology and order topology: in the former, the central concern is closeness, while in the latter, both closeness and order preservation are equally important. After all these years of research, it can be agreed that none of the examined PDMs is universally superior to all others in all respects.
However, to the best of our knowledge, for the first time, a statistical foundation has been created to evaluate scenarios when PDMs coincide and when their discrepancies are statistically significant. Hence, the possibility was created for a DM to assess the risk of accepting an ineffective PDM or rejecting an effective PDM—the standard problem known to every statistician and very important to each DM during the statistical evaluation of decisional options; i.e., statistical alternative hypothesis testing.
For this research, four distinctive PDMs were selected on the basis of different criteria. PREV is studied because it was conceived together with the AHP. LLSM is considered because it has a simple closed form and is usually promoted as the best alternative to PREV. SNCS is taken into consideration for its simplicity and good effectiveness, as shown in, e.g., [19,47]. LSDM is examined because it combines spectral theory with an optimization-based approach to PCB problems.
As stated earlier, all the above PDMs have been more or less intensively studied and have shown their effectiveness, efficiency and desired analytical properties [34,103]. They have been evaluated from the perspective of various measures of effectiveness; e.g., Mean Square Error (MSE), Mean Absolute Deviation (MAD), Mean Central Conformity (MCC), Mean Rank Violation (MRV) (see, e.g., [19,36,47,102]), the Coefficient of Multiple Determination (CMD = R2), which is widely applied in regression analysis (see, e.g., [104]), the Garuti Compatibility Index (GCI) and the Saaty Compatibility Index (SCI) (see, e.g., [58,105,106]). However, in this research, focus was given to statistical measures of examined phenomena and their statistical significance; thus, emphasis was placed on the introduced PDM approximation quality measures; i.e., mostly MSRC, but also MPCC and MAAD. Thus, the discrepancies in the correlation scores presented in Table 5, Table 6, Table 7 and Table 8 are analyzed from the perspective of the PDM rank preservation capability designated by MSRC and the general correlation significance determined by MPCC.
As can be seen, all PDMs perform steadily under the four MCS scenarios presented in Table 5, Table 6, Table 7 and Table 8. It should be noted that PREV does not always outperform the other PDMs in this study; in many cases, it is actually the other way round, and the selected PDMs perform better from the perspective of the approximation quality represented by MSRC. In this respect, LSDM dominates PREV most often in comparison with the other PDMs and across all applied MCS scenarios (the numerical subscript of a particular PDM in the table headers indicates how many times the indexed PDM prevails over PREV). This is an important piece of information, as the quality with which a PDM approximates the DM’s preference intensities is a crucial issue in PCB problems.
Fortunately, the information provided in Table 5, Table 6, Table 7 and Table 8 can be analyzed from the statistical perspective because the significance of the difference between any two correlation coefficients (CC), denoted as CC[1] and CC[2], can be tested using the t statistic defined by the following formula:
t = R·√(n − 2)/√(1 − R²)
where R is the difference between the particular CCs.
This statistic follows Student’s t-distribution with df = n − 2 degrees of freedom, where n equals the size of the sample. Thus, the following hypotheses can be tested:
H0: CC[1] − CC[2] = 0 versus H1: CC[1] − CC[2] > 0,
and conversely,
H0: CC[1] − CC[2] > 0 versus H1: CC[1] − CC[2] ≤ 0.
Hence, the following conclusions can be drawn from the data provided in Table 5, Table 6, Table 7 and Table 8. If the performance of a particular PDM differs from the performance of PREV by less than 0.00160 (computed t = 0.252972417), then it can be assumed at an 80% confidence level (critical t = 0.253) that its performance discrepancy in relation to PREV is negligible. For an 85% confidence level (critical t = 0.1895), this discrepancy should be smaller than 0.00118 (computed t = 0.186567049), and for a 90% confidence level (critical t = 0.126), it should be smaller than 0.00076 (computed t = 0.120161779).
On the other hand, if the performance of a particular PDM differs from the performance of PREV by more than 0.00815 (computed t = 1.288619398), then it can be assumed at an 80% confidence level (critical t = 1.282) that its performance discrepancy in relation to PREV is significant and should not be neglected. For an 85% confidence level (critical t = 1.452), this discrepancy should be greater than 0.00920 (computed t = 1.454651099), and for a 90% confidence level (critical t = 1.645), it should be greater than 0.01050 (computed t = 1.660220885).
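The quoted cut-off values can be verified directly from the formula with n = 25,000 (250 frameworks × 100 perturbations); the small helper below is my own addition.

```python
import math

def t_stat(R, n=25_000):
    """t statistic for a difference R between two correlation
    coefficients; df = n - 2, where n is the sample size."""
    return R * math.sqrt(n - 2) / math.sqrt(1 - R ** 2)
```

For example, t_stat(0.00160) ≈ 0.2530 and t_stat(0.00815) ≈ 1.2886, matching the computed t values reported in the text.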
To examine this issue more completely, the MAAD scores for the four studied scenarios are presented in Table A5, and the discrepancies between the MAAD scores of the three other PDMs and that of PREV for the four studied scenarios are presented in Table 9.
It can be noticed that all examined PDMs perform quite similarly from the perspective of their approximation quality evaluated by their MAAD. Nevertheless, considering MAAD as the performance criterion, LSDM outperforms PREV 21 times, LLSM is better than PREV 22 times and SNCS outperforms PREV 9 times. In conclusion, as PREV is quite frequently criticized in the literature, this outcome should not be surprising.
Taking into account the above examination results, it was decided to analyze one more scenario with the MCS. Generally, the algorithm applied for the MCS this time is an expanded version of the approach applied in the earlier examples of this research. The algorithm can be specified as follows:
  • Step 1. Generate a random PV—i.e., k = [k1, ..., kn]T of size [n × 1]—for the criteria and the associated original unbiased PCM(k) = K(k).
  • Step 2. Randomly generate exactly n PVs—i.e., an = [an,1, ..., an,m]T of size [m × 1]—for the alternatives under each criterion and the associated original unbiased PCMs(a) = An(a).
  • Step 3. Compute the joint priority vector w of size [m x 1] by the following procedure:
    wx = k1·a1,x + k2·a2,x + … + kn·an,x
  • Step 4. Randomly select a number e from the given interval [α,β] based on the given PD.
  • Step 5. Apply step 5A and step 5B as separate simulation variants:
    Step 5A is a case of applying forced reciprocity to the PCM:
    Replace all elements aij for i < j of all An(a) with eaij and all elements kij for i < j of K(k) with ekij.
    Step 5B is the case of the acceptance of nonreciprocal PCM:
    Replace all elements aij for i ≠ j of all An(a) with eaij and all elements kij for i ≠ j of K(k) with ekij.
  • Step 6. Use steps 6A and 6B separately:
    Step 6A—if Step 5A is satisfied,
    Round all values of elements aij for i < j of all An(a) and all values of elements kij for i < j of K(k) to the nearest values of the scale under consideration, then replace all elements aij for i > j of all An(a) by 1/aij and all elements kij for i > j of K(k) by 1/kij.
    Step 6B—after completing step 5B,
    Round all values of elements aij for i ≠ j of all An(a) and all values of elements kij for i ≠ j of K(k) to the nearest values of the scale under consideration.
  • Step 7. Given all perturbed An(a), denoted as An(a)*, and perturbed K(k), denoted as K(k)*, compute their corresponding PVs an* and k* using the given PDM; i.e., PREV, SNCS, LLSM, and LSDM.
  • Step 8. Calculate the TPV w*(PDM) of size [m x 1] by the following procedure:
    w*x = k*1·a*1,x + k*2·a*2,x + … + k*n·a*n,x
  • Step 9. Calculate the SRCC for all w*(PDM) and w, as well as any specified approximation quality characteristics—e.g., MAD, PCC, or other relative deviations—e.g., mean relative errors, denoted as
    MREγ,χ(w*(PDM), w) = (1/m)·Σi=1..m |wi − w*i(PDM)|/wi
    or average relative ratios, the value of which is given in [30], denoted as
    MRRγ,χ(w*(PDM), w) = (1/m)·Σi=1..m w*i(PDM)/wi
  • Step 10. Repeat steps 4 to 9 χ times, with the sample size denoted as χ.
  • Step 11. Repeat steps 1 to 10 γ times, with the number of AHP models considered denoted as γ.
  • Step 12. Return the arithmetic averages of all approximation quality functions computed during all executions in Steps 10 and 11.
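The loop of Steps 1–9 can be sketched compactly. The following is a simplified Python rendering under stated assumptions: the perturbation factor is drawn uniformly, only the forced-reciprocity branch (Steps 5A/6A) is shown, and the geometric-mean (LLSM) formula stands in for the generic PDM of Step 7; all function names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
SAATY = np.array([1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2,
                  1, 2, 3, 4, 5, 6, 7, 8, 9])

def pcm_from_pv(v):
    """Original, fully consistent PCM induced by a PV: w_ij = v_i / v_j."""
    v = np.asarray(v)
    return v[:, None] / v[None, :]

def perturb(W, low=0.75, high=1.25):
    """Steps 4-6 (variant A): multiply each upper-triangle entry by a
    random e, round it to the nearest scale value, force reciprocity."""
    n = W.shape[0]
    P = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            e = rng.uniform(low, high)
            P[i, j] = SAATY[np.abs(SAATY - W[i, j] * e).argmin()]
            P[j, i] = 1.0 / P[i, j]
    return P

def llsm(W):
    """Geometric-mean PV (LLSM), standing in for the PDM of Step 7."""
    g = np.exp(np.log(W).mean(axis=1))
    return g / g.sum()

def simulate_once(n_crit=4, n_alt=4):
    k = rng.random(n_crit); k /= k.sum()                  # Step 1
    A = rng.random((n_crit, n_alt))
    A /= A.sum(axis=1, keepdims=True)                     # Step 2
    w = k @ A                                             # Step 3: joint PV
    k_star = llsm(perturb(pcm_from_pv(k)))                # Steps 4-7 (criteria)
    A_star = np.array([llsm(perturb(pcm_from_pv(a))) for a in A])
    w_star = k_star @ A_star                              # Step 8: estimated TPV
    return w, w_star

w, w_star = simulate_once()
```

Steps 9–12 then amount to accumulating SRCC, MAD, MRE or MRR between w and w*(PDM) over χ perturbations of γ frameworks and averaging.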
This time, the MCS scenario was developed with new assumptions in mind. Therefore, not only the results obtained with reciprocal PCMs (RPCMs) but also the results obtained with non-reciprocal PCMs (APCMs) were considered. Although the AHP does not allow APCMs in its structure, it seems reasonable to analyze their application to PCB problems; see, for example, [37,40,71]. It was also decided to introduce new intervals for the perturbation factors in the MCS and to apply new PDs for them. Obviously, this time, the expected value of eij was also close to unity; i.e., EV(e) ≈ 1. Although this requirement is relatively easy to fulfill with an asymmetric interval for eij (relative to its effect on a particular PCM element), it is quite difficult to realize with a symmetric interval for eij. However, this goal was achieved in this study. It is reasonable to apply symmetric intervals for eij in the MCS as well, as they more realistically reflect real human performance in pairwise comparisons without strong outliers. Thus, experiments were conducted with different types of PDs, and it was found that the Fisher–Snedecor PD has a property that is useful for the intended purpose. Namely, for n1 = 14 and n2 = 40 degrees of freedom, 1000 randomly generated numbers based on this PD had a mean of 1.03617, i.e., very close to one, and ranged from 0.174526 to 5.57826. Under these assumptions, e∈[0.174526, 5.57826] therefore holds, giving a practically symmetric interval for eij with EV(e) ≈ 1. The results for the selected PDMs and their assumed performance quality measures—i.e., the mean Spearman rank correlation coefficient (MSRC), mean average relative error (MARE) and mean average relative ratio (MARR)—derived from the MCS scenario described earlier are shown in Table 10.
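A draw of this kind is easy to replicate; the sketch below checks that a Fisher–Snedecor sample with these degrees of freedom stays close to one on average (the exact sample statistics depend on the particular random sample, so they will differ from the ones reported above).

```python
import numpy as np

rng = np.random.default_rng(42)
# Fisher-Snedecor (F) distribution with n1 = 14 and n2 = 40 degrees of freedom
e = rng.f(dfnum=14, dfden=40, size=1000)
sample_mean, lo, hi = e.mean(), e.min(), e.max()
```

For F(14, 40), the theoretical mean is n2/(n2 − 2) = 40/38 ≈ 1.053, so sample means near unity are expected.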
As can be seen again, PREV is not the dominant PDM across all simulation scenarios in the established framework (it ranks second overall, ex aequo with LLSM). The differences in the quality of the PV approximation depending on the chosen PDM are especially evident for non-reciprocal PCMs. LSDM and LLSM outperform the other chosen PDMs, especially with respect to rank correlations, which are crucial for the phenomenon of rank preservation.
The condition of order preservation (preservation of preference intensity) was introduced by Bana e Costa and Vansnick [13]. They gave the following definition: for all alternatives A1, A2, A3 and A4 such that A1 dominates A2, A3 dominates A4, and the degree of dominance of A1 over A2 is greater than the degree of dominance of A3 over A4, not only w1 > w2 and w3 > w4 but also w1/w2 > w3/w4 should hold for the obtained PV. Therefore, the following scenario considered by Bana e Costa and Vansnick is revisited.
When the PCM is given as
        A1      A2      A3      A4      A5
A1      1       2       3       5       9
A2      1/2     1       2       4       9
A3      1/3     1/2     1       2       8
A4      1/5     1/4     1/2     1       7
A5      1/9     1/9     1/8     1/7     1
according to the common linguistic interpretation of the AHP, A1 is strongly dominant over A4 (a14 = 5) and A4 is very strongly dominant over A5 (a45 = 7). The COP therefore requires that w1/w4 < w4/w5 for the derived PV.
However, the PV obtained from PREV is [0.4262, 0.2809, 0.1652, 0.1008, 0.0269]T, which gives w1/w4 = 4.218 > w4/w5 = 3.741 and thus violates the condition of order preservation (COP). On the other hand, the PV obtained by, e.g., LSDM is [0.434659, 0.282449, 0.163602, 0.097671, 0.021620]T, which results in the ratios w1/w4 = 4.450245 < w4/w5 = 4.517668 and, unlike PREV, satisfies the COP.
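A brute-force COP check over all pairs of dominant entries makes the comparison reproducible; the helper below is my own sketch, and the assertion mirrors only the particular (A1, A4) versus (A4, A5) pair discussed above.

```python
def cop_violations(W, w):
    """Return quadruples (i, j, k, l) with a_ij > a_kl > 1 for which the
    derived priorities fail the COP, i.e., w_i / w_j <= w_k / w_l."""
    n = len(w)
    dominant = [(i, j) for i in range(n) for j in range(n) if W[i][j] > 1]
    return [(i, j, k, l)
            for (i, j) in dominant for (k, l) in dominant
            if W[i][j] > W[k][l] and w[i] / w[j] <= w[k] / w[l]]

# Bana e Costa and Vansnick's example PCM
W = [[1, 2, 3, 5, 9],
     [1/2, 1, 2, 4, 9],
     [1/3, 1/2, 1, 2, 8],
     [1/5, 1/4, 1/2, 1, 7],
     [1/9, 1/9, 1/8, 1/7, 1]]

pv_prev = [0.4262, 0.2809, 0.1652, 0.1008, 0.0269]
pv_lsdm = [0.434659, 0.282449, 0.163602, 0.097671, 0.021620]
```

For the PREV vector, the quadruple (A4, A5, A1, A4) appears among the violations, whereas for the LSDM vector it does not, reproducing the contrast described above.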
This property of LSDM is especially interesting given the perfect rank correlations between PVs derived by LSDM and PREV from randomly generated (uniform probability distribution) transitive and reciprocal inconsistent PCMs (TRPCMs) (see Table 11).
In order to compare the results obtained using LSDM with those obtained using PREV and to check whether they coincide or whether the order of their PV elements can be reversed, 1000 TRPCMs were generated using MCS. For each randomly generated TRPCM, two PVs were determined: PVLSDM and PVPREV, calculated using LSDM and PREV, respectively. In addition, the PCCs between the PV elements and the SRCCs between their priority ranks were calculated. The numbers of alternatives n were chosen as follows: 3, 4, 5, 6, 7, 8, 9, 10 and 12. The number of criteria was set to one. The standard numerical AHP scale was used to express the judgments; i.e., the integers 1–9 and their reciprocals.
Table 11 shows the mean correlation coefficients between PV elements and the priority ranks obtained during MCS with respect to the number of alternatives taken into consideration.
Considering the results presented in Table 11, three facts can be noted: firstly, for n = 3, both methods coincide perfectly; secondly, the MSRC values for n > 3 equal 1, meaning that no rank reversal occurred between LSDM and PREV for the 1000 randomly generated TRPCMs; lastly, the MPCC values for n > 3 between the PVs derived by LSDM and PREV for the 1000 TRPCMs practically coincide with unity—i.e., MPCC ≈ 1—which indicates the almost perfect coincidence of both PDMs.

5. Conclusions

Discrepancies and similarities among the selected PDMs have been investigated in this research paper from various perspectives, including the statistical approach. For this purpose, selected statistical measures of the effectiveness of PDMs (approximation quality) were applied; i.e., MSRC, MPCC, MARE, MAAD and MARR. Information concerning the statistical significance of the discrepancies and similarities among the examined PDMs is clearly presented. In this way, to the best of our knowledge, for the first time, a statistical foundation has been created to identify situations in which PDMs coincide and their discrepancies can be considered negligible, and situations in which their discrepancies are statistically significant and should not be neglected. Hence, the possibility was created for a DM to assess the risk of accepting an ineffective PDM or rejecting an effective PDM—the standard problem known to every statistician and very important to each DM during the statistical evaluation of decisional options; i.e., statistical alternative hypothesis testing. A ranking of the PDMs evaluated in the manuscript, based on novel Monte Carlo simulation scenarios, was also presented. Furthermore, an interesting advantage of LSDM which the other evaluated PDMs do not share—i.e., satisfaction of the condition of order preservation—is also presented in the article. These research accomplishments should provide fundamental support for any DM deciding which PDM to choose in various circumstances.
Given the reality of our physical world, no study is perfect. In order to compare the characteristics of the estimates obtained in the simulation process for the selected PDMs, different situations related to various sources of PCM inconsistency were simulated. Fundamentally, PCM inconsistency commonly results from errors caused by the nature of human judgments and from errors due to the technical realization of the pairwise comparison procedure; i.e., rounding errors and errors resulting from the forced reciprocity requirement commonly imposed in PCB ranking problems. All the above errors can be simulated, but the nature of human judgments is represented here as the realization of a stochastic process in accordance with an assumed probability distribution of the perturbation factor; e.g., uniform, gamma, truncated normal or log-normal. As this process is generated by a computer, it represents a certain limitation of the presented research. Thus, there is space for further research in this area with the application of different MCS scenarios, various other measures of PDM effectiveness (performance quality) and a case-based methodology.

Funding

This research received no external funding. The APC was funded by Opole University of Technology, Poland.

Data Availability Statement

Datasets were generated in Wolfram’s Mathematica Software 11 concurrently during the study. All statistics obtained during their analysis are an integral part of this research paper.

Acknowledgments

I am grateful for the effort of five anonymous reviewers who significantly improved the first and the second version of this research paper.

Conflicts of Interest

The author declares no conflict of interest. The APC funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Table A1. Performance evaluations of four arbitrarily selected distinctive priority deriving methods for 25,000 cases of various uniformly drawn and uniformly perturbed AHP frameworks (%).

Values in each block: PREV | LLSM[8] | SNCS[8] | LSDM[9]

Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.954114 | 0.953954 | 0.954394 | 0.954034
  MPCC: 0.996283 | 0.996103 | 0.996037 | 0.996266
Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.960275 | 0.960486 | 0.959488 | 0.960366
  MPCC: 0.997085 | 0.996477 | 0.996947 | 0.997012
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.818026 | 0.832331 | 0.827549 | 0.822583
  MPCC: 0.919355 | 0.925809 | 0.927142 | 0.920789
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.808539 | 0.835546 | 0.822548 | 0.814658
  MPCC: 0.914050 | 0.930354 | 0.925367 | 0.916495
Geometric Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.972260 | 0.970506 | 0.971797 | 0.972263
  MPCC: 0.998200 | 0.997937 | 0.997919 | 0.998164
Geometric Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.975308 | 0.974428 | 0.974223 | 0.975294
  MPCC: 0.998289 | 0.997909 | 0.998099 | 0.998241
Geometric Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.829677 | 0.841497 | 0.842137 | 0.833560
  MPCC: 0.926624 | 0.937096 | 0.938049 | 0.929571
Geometric Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.758181 | 0.806346 | 0.784396 | 0.769094
  MPCC: 0.896157 | 0.929493 | 0.919387 | 0.904046
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.961512 | 0.961167 | 0.961672 | 0.961424
  MPCC: 0.997214 | 0.997235 | 0.997212 | 0.997215
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.968680 | 0.968997 | 0.968652 | 0.968741
  MPCC: 0.998276 | 0.998233 | 0.998239 | 0.998266
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.803089 | 0.818714 | 0.812574 | 0.807100
  MPCC: 0.917198 | 0.928645 | 0.929140 | 0.920726
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.751877 | 0.800270 | 0.781048 | 0.763255
  MPCC: 0.897493 | 0.932354 | 0.923762 | 0.906297

(%) All simulation scenarios imposed reciprocity conditions for every examined PCM within each AHP framework, uniformly drawn perturbation factors from the indicated interval and rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. The selected exemplary scale is a simple ordinal scale from 1 to 50.
Table A2. Performance evaluations of four arbitrarily selected distinctive priority deriving methods for 25,000 cases of various uniformly drawn and log-normally perturbed AHP frameworks (%).

Values in each block: PREV | LLSM[10] | SNCS[10] | LSDM[11]

Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.953746 | 0.953537 | 0.953960 | 0.953849
  MPCC: 0.991004 | 0.990507 | 0.990439 | 0.990924
Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.942781 | 0.941309 | 0.941449 | 0.942662
  MPCC: 0.994939 | 0.994112 | 0.994609 | 0.994809
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.809600 | 0.812803 | 0.810080 | 0.811654
  MPCC: 0.895606 | 0.898055 | 0.896622 | 0.897116
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.736278 | 0.739493 | 0.737974 | 0.738273
  MPCC: 0.884172 | 0.887095 | 0.882575 | 0.886230
Geometric Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.949380 | 0.950646 | 0.950451 | 0.949569
  MPCC: 0.995827 | 0.995758 | 0.995706 | 0.995818
Geometric Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.965776 | 0.966252 | 0.965441 | 0.965881
  MPCC: 0.996819 | 0.996391 | 0.996552 | 0.996769
Geometric Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.809509 | 0.815140 | 0.811486 | 0.811577
  MPCC: 0.899603 | 0.901646 | 0.901534 | 0.900910
Geometric Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.762026 | 0.771645 | 0.767378 | 0.766894
  MPCC: 0.898035 | 0.903934 | 0.896692 | 0.901356
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.954331 | 0.954854 | 0.954460 | 0.954406
  MPCC: 0.994703 | 0.994667 | 0.994533 | 0.994705
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.942954 | 0.944344 | 0.943296 | 0.943163
  MPCC: 0.996367 | 0.996354 | 0.996285 | 0.996360
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.799409 | 0.802834 | 0.800323 | 0.802071
  MPCC: 0.889681 | 0.892092 | 0.888281 | 0.891404
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.763264 | 0.766324 | 0.763543 | 0.766648
  MPCC: 0.898773 | 0.902773 | 0.897047 | 0.901440

(%) All simulation scenarios imposed reciprocity conditions for every examined PCM within each AHP framework, log-normally drawn perturbation factors from the indicated interval and rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. The selected exemplary scale is a simple ordinal scale from 1 to 50.
Table A3. Performance evaluations of four arbitrarily selected distinctive priority deriving methods for 25,000 cases of various uniformly drawn and truncated-normally perturbed AHP frameworks (%).

Values in each block: PREV | LLSM[9] | SNCS[6] | LSDM[10]

Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.952060 | 0.950040 | 0.947580 | 0.951749
  MPCC: 0.997388 | 0.997020 | 0.996865 | 0.997337
Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.974105 | 0.973690 | 0.973285 | 0.974142
  MPCC: 0.997360 | 0.996866 | 0.997141 | 0.997308
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.918383 | 0.921563 | 0.921223 | 0.918700
  MPCC: 0.988553 | 0.988792 | 0.988718 | 0.988587
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.914576 | 0.920314 | 0.917226 | 0.915278
  MPCC: 0.989527 | 0.989938 | 0.989911 | 0.989531
Geometric Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.971405 | 0.971627 | 0.969968 | 0.971434
  MPCC: 0.998091 | 0.997849 | 0.997573 | 0.998059
Geometric Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.981538 | 0.980776 | 0.980626 | 0.981576
  MPCC: 0.999183 | 0.998891 | 0.999057 | 0.999161
Geometric Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.950677 | 0.953594 | 0.951866 | 0.951023
  MPCC: 0.987563 | 0.988619 | 0.988256 | 0.987754
Geometric Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.925444 | 0.933470 | 0.929917 | 0.926256
  MPCC: 0.989946 | 0.991220 | 0.990733 | 0.990110
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.977257 | 0.978649 | 0.976406 | 0.977389
  MPCC: 0.998227 | 0.998163 | 0.998168 | 0.998223
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.975885 | 0.975917 | 0.975790 | 0.975878
  MPCC: 0.998537 | 0.998490 | 0.998504 | 0.998526
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.914943 | 0.919191 | 0.916900 | 0.915900
  MPCC: 0.981219 | 0.982817 | 0.982553 | 0.981489
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.915798 | 0.925127 | 0.920089 | 0.917169
  MPCC: 0.989423 | 0.991284 | 0.990701 | 0.989650

(%) All simulation scenarios imposed reciprocity conditions for every examined PCM within each AHP framework, truncated-normally drawn perturbation factors from the indicated interval and rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. The selected exemplary scale is a simple ordinal scale from 1 to 50.
Table A4. Performance evaluations of four arbitrarily selected distinctive priority deriving methods for 25,000 cases of various uniformly drawn and gamma perturbed AHP frameworks (%).

Values in each block: PREV | LLSM[10] | SNCS[10] | LSDM[10]

Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.948900 | 0.947383 | 0.949043 | 0.948911
  MPCC: 0.991729 | 0.991345 | 0.991349 | 0.991679
Saaty’s Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.940537 | 0.940207 | 0.939364 | 0.940564
  MPCC: 0.994642 | 0.994043 | 0.994399 | 0.994550
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.700517 | 0.716474 | 0.713586 | 0.704297
  MPCC: 0.792365 | 0.801692 | 0.807358 | 0.794835
Saaty’s Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.628579 | 0.656190 | 0.649620 | 0.633529
  MPCC: 0.741102 | 0.765560 | 0.766790 | 0.739974
Geometric Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.958689 | 0.959289 | 0.958826 | 0.958983
  MPCC: 0.995120 | 0.994878 | 0.994857 | 0.995092
Geometric Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.960721 | 0.961264 | 0.960395 | 0.960567
  MPCC: 0.996408 | 0.996086 | 0.996157 | 0.996362
Geometric Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.710669 | 0.729831 | 0.733769 | 0.717891
  MPCC: 0.777665 | 0.797508 | 0.808600 | 0.783750
Geometric Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.633144 | 0.681213 | 0.667491 | 0.647042
  MPCC: 0.705974 | 0.763865 | 0.753209 | 0.718163
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{3,4,…,7}:
  MSRC: 0.938357 | 0.938480 | 0.939657 | 0.938291
  MPCC: 0.992896 | 0.992939 | 0.992925 | 0.992902
Selected Exemplary Scale, eij∈[0.75,1.25], nk,na∈{8,9,…,12}:
  MSRC: 0.949155 | 0.949673 | 0.949495 | 0.949216
  MPCC: 0.995949 | 0.995938 | 0.995841 | 0.995942
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{3,4,…,7}:
  MSRC: 0.652014 | 0.685663 | 0.689277 | 0.667789
  MPCC: 0.716426 | 0.752258 | 0.768266 | 0.730457
Selected Exemplary Scale, eij∈[0.05,1.95], nk,na∈{8,9,…,12}:
  MSRC: 0.517215 | 0.596775 | 0.579539 | 0.549460
  MPCC: 0.596455 | 0.720996 | 0.704673 | 0.638277

(%) All simulation scenarios imposed reciprocity conditions for every examined PCM within each AHP framework, gamma drawn perturbation factors from the indicated interval and rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. The selected exemplary scale is a simple ordinal scale from 1 to 50.
Table A5. Performance evaluations of four arbitrarily selected distinctive priority deriving methods for 25,000 cases of different uniformly drawn and variously perturbed AHP frameworks (%).
Table A5. Performance evaluations of four arbitrarily selected distinctive priority deriving methods for 25,000 cases of different uniformly drawn and variously perturbed AHP frameworks (%).
Simulation Parameters($)   PDM   MAAD[1]   MAAD[2]   MAAD[3]   MAAD[4]
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  PREV  0.0152166  0.0170793  0.0157268  0.0183054
  LLSM  0.0158009  0.0176242  0.0164453  0.0187979
  SNCS  0.0166151  0.0183194  0.0175465  0.0197925
  LSDM  0.0152939  0.0171737  0.0158418  0.0183912
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  PREV  0.0067629  0.0089824  0.0064799  0.0080005
  LLSM  0.0074658  0.0097037  0.0072243  0.0085362
  SNCS  0.0072654  0.0097470  0.0070307  0.0086307
  LSDM  0.0068723  0.0091221  0.0065773  0.0080944
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  PREV  0.0423500  0.0538714  0.0241595  0.0752394
  LLSM  0.0411620  0.0532517  0.0244044  0.0734771
  SNCS  0.0435337  0.0561386  0.0260062  0.0760735
  LSDM  0.0420947  0.0535864  0.0242102  0.0747223
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  PREV  0.0201425  0.0242049  0.0102224  0.0322768
  LLSM  0.0187484  0.0238864  0.0104191  0.0309819
  SNCS  0.0200100  0.0248888  0.0107942  0.0312162
  LSDM  0.0199658  0.0240361  0.0102943  0.0321762
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  PREV  0.0125662  0.0156983  0.0091365  0.0161565
  LLSM  0.0133571  0.0160638  0.0096679  0.0164534
  SNCS  0.0141659  0.0168236  0.0102382  0.0172445
  LSDM  0.0127110  0.0157627  0.0092115  0.0161972
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  PREV  0.0046742  0.0061551  0.0040777  0.0057862
  LLSM  0.0052204  0.0067697  0.0047142  0.0062594
  SNCS  0.0051243  0.0068553  0.0044819  0.0063425
  LSDM  0.0047530  0.0062525  0.0041514  0.0058646
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  PREV  0.0431238  0.0519107  0.0175077  0.0697836
  LLSM  0.0411438  0.0515763  0.0174912  0.0674779
  SNCS  0.0437114  0.0544794  0.0185578  0.0688967
  LSDM  0.0426094  0.0516975  0.0175154  0.0689917
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  PREV  0.0195622  0.0209548  0.0079203  0.0318991
  LLSM  0.0169201  0.0205455  0.0079355  0.0289979
  SNCS  0.0183839  0.0216180  0.0083508  0.0296353
  LSDM  0.0190334  0.0207261  0.0079570  0.0312078
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  PREV  0.0085652  0.0126110  0.0073027  0.0124882
  LLSM  0.0085662  0.0126199  0.0073812  0.0124307
  SNCS  0.0088237  0.0131611  0.0076280  0.0128388
  LSDM  0.0085692  0.0126112  0.0073205  0.0124823
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  PREV  0.0035946  0.0050721  0.0030662  0.0050876
  LLSM  0.0036694  0.0051352  0.0031548  0.0051000
  SNCS  0.0037865  0.0054186  0.0032253  0.0053887
  LSDM  0.0036145  0.0050970  0.0030948  0.0051038
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  PREV  0.0441969  0.0525553  0.0187025  0.0810866
  LLSM  0.0418959  0.0521145  0.0183152  0.0770536
  SNCS  0.0440019  0.0553910  0.0190532  0.0775137
  LSDM  0.0436105  0.0523095  0.0186587  0.0795806
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  PREV  0.0192493  0.0227827  0.0074784  0.0370230
  LLSM  0.0161027  0.0224996  0.0069925  0.0312237
  SNCS  0.0175419  0.0236406  0.0075397  0.0317082
  LSDM  0.0185725  0.0225833  0.0074343  0.0349929
(%) All simulation scenarios imposed reciprocity conditions on every examined PCM within each AHP framework, applied perturbation factors drawn from the indicated interval under a uniform (MAAD[1]), log-normal (MAAD[2]), truncated-normal (MAAD[3]), or gamma (MAAD[4]) distribution, and applied the rounding errors connected with the assigned preference scale. ($) PDM stands for "priority deriving method".
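The simulation scheme summarized above (building an error-free PCM from a hidden priority vector, perturbing its entries, imposing reciprocity, and rounding to a preference scale) can be sketched as follows. This is a minimal illustration under assumed function names and a fixed seed; the study itself used Wolfram's Mathematica and four perturbation distributions, of which only the uniform case is shown here.

```python
import numpy as np

rng = np.random.default_rng(20210901)  # assumed seed, for reproducibility only

# Saaty's fundamental scale with reciprocals: 1/9, 1/8, ..., 1/2, 1, 2, ..., 9
SAATY = np.concatenate((1.0 / np.arange(9, 1, -1), np.arange(1, 10, dtype=float)))

def consistent_pcm(w):
    """Error-free PCM implied by a priority vector w: a_ij = w_i / w_j."""
    return np.outer(w, 1.0 / w)

def perturb_and_round(A, scale, low=0.75, high=1.25):
    """Multiply each above-diagonal entry by a uniformly drawn perturbation
    factor e_ij, snap it to the closest scale value (rounding error), and
    impose the reciprocity condition a_ji = 1 / a_ij."""
    n = A.shape[0]
    P = np.ones_like(A)
    for i in range(n):
        for j in range(i + 1, n):
            e = rng.uniform(low, high)                             # perturbation factor
            P[i, j] = scale[np.abs(scale - A[i, j] * e).argmin()]  # rounding to the scale
            P[j, i] = 1.0 / P[i, j]                                # reciprocity
    return P

w = rng.dirichlet(np.ones(5))                     # hidden "true" priority vector
M = perturb_and_round(consistent_pcm(w), SAATY)   # observed, inconsistent PCM
```

A priority vector estimated from `M` by any PDM can then be compared against the hidden `w` with the statistics of Table 2.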

References

  1. Colomer, J.M. Ramon Llull: From ‘Ars Electionis’ to Social Choice Theory. Soc. Choice Welf. 2013, 40, 317–328. [Google Scholar] [CrossRef]
  2. Arrow, K.; Sen, A.K.; Suzumura, K. Handbook of Social Choice and Welfare; Elsevier: Amsterdam, The Netherlands, 2011. [Google Scholar]
  3. Fechner, G.T. Elemente der Psychophysik; Breitkopf und Härtel: Leipzig, Germany, 1860. [Google Scholar]
  4. Thurstone, L.L. A Law of Comparative Judgment. Psychol. Rev. 1927, 34, 273–286. [Google Scholar] [CrossRef]
  5. David, H.A. The Method of Paired Comparisons; Griffin, C., Ed.; Oxford University Press: London, UK; New York, NY, USA, 1988; ISBN 978-0-19-520616-6. [Google Scholar]
  6. Peterson, G.L.; Brown, T.C. Economic Valuation by the Method of Paired Comparison, with Emphasis on Evaluation of the Transitivity Axiom. Land Econ. 1998, 74, 240–261. [Google Scholar] [CrossRef]
  7. Saaty, T.L. The Analytic Hierarchy Process: Decision Making in Complex Environments. In Quantitative Assessment in Arms Control: Mathematical Modeling and Simulation in the Analysis of Arms Control Problems; Avenhaus, R., Huber, R.K., Eds.; Springer: Boston, MA, USA, 1984; pp. 285–308. ISBN 978-1-4613-2805-6. [Google Scholar]
  8. Saaty, T.L. Decision Making with the Analytic Hierarchy Process. IJSSCI 2008, 1, 83. [Google Scholar] [CrossRef] [Green Version]
  9. Zhao, H.; Yao, L.; Mei, G.; Liu, T.; Ning, Y. A Fuzzy Comprehensive Evaluation Method Based on AHP and Entropy for a Landslide Susceptibility Map. Entropy 2017, 19, 396. [Google Scholar] [CrossRef]
  10. Feng, G.; Lei, S.; Guo, Y.; Meng, B.; Jiang, Q. Optimization and Evaluation of Ventilation Mode in Marine Data Center Based on AHP-Entropy Weight. Entropy 2019, 21, 796. [Google Scholar] [CrossRef] [Green Version]
  11. Hodicky, J.; Özkan, G.; Özdemir, H.; Stodola, P.; Drozd, J.; Buck, W. Analytic Hierarchy Process (AHP)-Based Aggregation Mechanism for Resilience Measurement: NATO Aggregated Resilience Decision Support Model. Entropy 2020, 22, 1037. [Google Scholar] [CrossRef] [PubMed]
  12. Tomashevskii, I.L. Eigenvector Ranking Method as a Measuring Tool: Formulas for Errors. Eur. J. Oper. Res. 2015, 240, 774–780. [Google Scholar] [CrossRef]
  13. Bana e Costa, C.A.; Vansnick, J.-C. A Critical Analysis of the Eigenvalue Method Used to Derive Priorities in AHP. Eur. J. Oper. Res. 2008, 187, 1422–1428. [Google Scholar] [CrossRef]
  14. Koczkodaj, W.W.; Mikhailov, L.; Redlarski, G.; Soltys, M.; Szybowski, J.; Tamazian, G.; Wajch, E.; Yuen, K.K.F. Important Facts and Observations about Pairwise Comparisons (the Special Issue Edition). Fundam. Inform. 2016, 144, 291–307. [Google Scholar] [CrossRef] [Green Version]
  15. Genest, C.; Rivest, L.-P. A Statistical Look at Saaty’s Method of Estimating Pairwise Preferences Expressed on a Ratio Scale. J. Math. Psychol. 1994, 38, 477–496. [Google Scholar] [CrossRef]
  16. Basak, I. Comparison of Statistical Procedures in Analytic Hierarchy Process Using a Ranking Test. Math. Comput. Model. 1998, 28, 105–118. [Google Scholar] [CrossRef]
  17. Bozóki, S.; Dezső, L.; Poesz, A.; Temesi, J. Analysis of Pairwise Comparison Matrices: An Empirical Research. Ann. Oper. Res. 2013, 211, 511–528. [Google Scholar] [CrossRef] [Green Version]
  18. Bryson, N. A Goal Programming Method for Generating Priority Vectors. J. Oper. Res. Soc. 1995, 46, 641–648. [Google Scholar] [CrossRef]
  19. Choo, E.U.; Wedley, W.C. A Common Framework for Deriving Preference Values from Pairwise Comparison Matrices. Comput. Oper. Res. 2004, 31, 893–908. [Google Scholar] [CrossRef]
  20. Cook, W.D.; Kress, M. Deriving Weights from Pairwise Comparison Ratio Matrices: An Axiomatic Approach. Eur. J. Oper. Res. 1988, 37, 355–362. [Google Scholar] [CrossRef]
  21. Crawford, G.; Williams, C. The Analysis of Subjective Judgment Matrices. Available online: https://www.rand.org/pubs/reports/R2572-1.html (accessed on 19 February 2020).
  22. Crawford, G.; Williams, C. A Note on the Analysis of Subjective Judgment Matrices. J. Math. Psychol. 1985, 29, 387–405. [Google Scholar] [CrossRef]
  23. Csató, L. Ranking by Pairwise Comparisons for Swiss-System Tournaments. Cent. Eur. J. Oper. Res. 2013, 21, 783–803. [Google Scholar] [CrossRef] [Green Version]
  24. Dijkstra, T.K. On the Extraction of Weights from Pairwise Comparison Matrices. Cent. Eur. J. Oper. Res. 2013, 21, 103–123. [Google Scholar] [CrossRef] [Green Version]
  25. Dong, Y.; Xu, Y.; Li, H.; Dai, M. A Comparative Study of the Numerical Scales and the Prioritization Methods in AHP. Eur. J. Oper. Res. 2008, 186, 229–242. [Google Scholar] [CrossRef]
  26. Farkas, A.; Rózsa, P. A Recursive Least-Squares Algorithm for Pairwise Comparison Matrices. Cent. Eur. J. Oper. Res. 2013, 21, 817–843. [Google Scholar] [CrossRef]
  27. Hosseinian, S.S.; Navidi, H.; Hajfathaliha, A. A New Linear Programming Method for Weights Generation and Group Decision Making in the Analytic Hierarchy Process. Group Decis. Negot. 2012, 21, 233–254. [Google Scholar] [CrossRef]
  28. Hovanov, N.V.; Kolari, J.W.; Sokolov, M.V. Deriving Weights from General Pairwise Comparison Matrices. Math. Soc. Sci. 2008, 55, 205–220. [Google Scholar] [CrossRef]
  29. Ishizaka, A.; Lusti, M. How to Derive Priorities in AHP: A Comparative Study. Cent. Eur. J. Oper. Res. 2006, 14, 387–400. [Google Scholar] [CrossRef] [Green Version]
  30. Kazibudzki, P.T. The Quality of Ranking during Simulated Pairwise Judgments for Examined Approximation Procedures. Model. Simul. Eng. 2019, 2019, e1683143. [Google Scholar] [CrossRef] [Green Version]
  31. Kou, G.; Ergu, D.; Chen, Y.; Lin, C. Pairwise Comparison Matrix in Multiple Criteria Decision Making. Technol. Econ. Dev. Econ. 2016, 22, 738–765. [Google Scholar] [CrossRef] [Green Version]
  32. Kou, G.; Lin, C. A Cosine Maximization Method for the Priority Vector Derivation in AHP. Eur. J. Oper. Res. 2014, 235, 225–232. [Google Scholar] [CrossRef] [Green Version]
  33. Kułakowski, K. A Heuristic Rating Estimation Algorithm for the Pairwise Comparisons Method. Cent. Eur. J. Oper. Res. 2015, 23, 187–203. [Google Scholar] [CrossRef] [Green Version]
  34. Kułakowski, K.; Mazurek, J.; Strada, M. On the Similarity between Ranking Vectors in the Pairwise Comparison Method. J. Oper. Res. Soc. 2021, 0, 1–10. [Google Scholar] [CrossRef]
  35. Lin, C.; Kou, G. A Heuristic Method to Rank the Alternatives in the AHP Synthesis. Appl. Soft Comput. 2020, 106916. [Google Scholar] [CrossRef]
  36. Lin, C.-C. A Revised Framework for Deriving Preference Values from Pairwise Comparison Matrices. Eur. J. Oper. Res. 2007, 176, 1145–1150. [Google Scholar] [CrossRef]
  37. Linares, P.; Lumbreras, S.; Santamaría, A.; Veiga, A. How Relevant Is the Lack of Reciprocity in Pairwise Comparisons? An Experiment with AHP. Ann. Oper. Res. 2016, 245, 227–244. [Google Scholar] [CrossRef]
  38. Mardani, A.; Jusoh, A.; Nor, K.M.; Khalifah, Z.; Zakwan, N.; Valipour, A. Multiple Criteria Decision-Making Techniques and Their Applications a Review of the Literature from 2000 to 2014. Econ. Res.-Ekon. Istraživanja 2015, 28, 516–571. [Google Scholar] [CrossRef]
  39. Mizuno, T. A Link Diagram for Pairwise Comparisons. In Proceedings of the Intelligent Decision Technologies 2018; Czarnowski, I., Howlett, R.J., Jain, L.C., Vlacic, L., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 181–186. [Google Scholar]
  40. Nishizawa, K. Non-Reciprocal Pairwise Comparisons and Solution Method in AHP. In Proceedings of the Intelligent Decision Technologies 2018; Czarnowski, I., Howlett, R.J., Jain, L.C., Vlacic, L., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 158–165. [Google Scholar]
  41. Orbán-Mihálykó, É.; Mihálykó, C.; Koltay, L. A Generalization of the Thurstone Method for Multiple Choice and Incomplete Paired Comparisons. Cent. Eur. J. Oper. Res. 2019, 27, 133–159. [Google Scholar] [CrossRef]
  42. Saaty, T.L.; Vargas, L.G. Comparison of Eigenvalue, Logarithmic Least Squares and Least Squares Methods in Estimating Ratios. Math. Model. 1984, 5, 309–324. [Google Scholar] [CrossRef] [Green Version]
  43. Saaty, T.L.; Vargas, L.G. The Possibility of Group Choice: Pairwise Comparisons and Merging Functions. Soc. Choice Welf. 2012, 38, 481–496. [Google Scholar] [CrossRef]
  44. Shiraishi, S.; Obata, T.; Daigo, M. Properties of a Positive Reciprocal Matrix and Their Application to AHP. J. Oper. Res. Soc. Jpn. 1998, 41, 404–414. [Google Scholar] [CrossRef] [Green Version]
  45. Temesi, J. Pairwise Comparison Matrices and the Error-Free Property of the Decision Maker. Cent. Eur. J. Oper. Res. 2011, 19, 239–249. [Google Scholar] [CrossRef] [Green Version]
  46. Wang, H.; Peng, Y.; Kou, G. A Two-Stage Ranking Method to Minimize Ordinal Violation for Pairwise Comparisons. Appl. Soft Comput. 2021, 107287. [Google Scholar] [CrossRef]
  47. Zahedi, F. A Simulation Study of Estimation Methods in the Analytic Hierarchy Process. Socio-Econ. Plan. Sci. 1986, 20, 347–354. [Google Scholar] [CrossRef]
  48. Zhu, B.; Xu, Z.; Zhang, R.; Hong, M. Hesitant Analytic Hierarchy Process. Eur. J. Oper. Res. 2016, 250, 602–614. [Google Scholar] [CrossRef]
  49. Saaty, T.L. Decision-Making with the AHP: Why Is the Principal Eigenvector Necessary. Eur. J. Oper. Res. 2003, 145, 85–91. [Google Scholar] [CrossRef]
  50. Saaty, T.L. A Scaling Method for Priorities in Hierarchical Structures. J. Math. Psychol. 1977, 15, 234–281. [Google Scholar] [CrossRef]
  51. Saaty, T.L. Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World; RWS Publ.: Pittsburgh, PA, USA, 2001; ISBN 978-0-9620317-8-6. [Google Scholar]
  52. Kazibudzki, P. Scenario Based Analysis of Logarithmic Utility Approach for Deriving Priority Vectors in Analytic Hierarchy Process. Sci. Res. Inst. Math. Comput. Sci. 2011, 10, 99–105. [Google Scholar]
  53. Faramondi, L.; Oliva, G.; Setola, R. Multi-Criteria Node Criticality Assessment Framework for Critical Infrastructure Networks. Int. J. Crit. Infrastruct. Prot. 2020, 28, 100338. [Google Scholar] [CrossRef]
  54. Aczél, J.; Saaty, T.L. Procedures for Synthesizing Ratio Judgements. J. Math. Psychol. 1983, 27, 93–102. [Google Scholar] [CrossRef]
  55. Saaty, T.L.; Hu, G. Ranking by Eigenvector versus Other Methods in the Analytic Hierarchy Process. Appl. Math. Lett. 1998, 11, 121–125. [Google Scholar] [CrossRef] [Green Version]
  56. Saaty, T.L. Relative Measurement and Its Generalization in Decision Making Why Pairwise Comparisons Are Central in Mathematics for the Measurement of Intangible Factors the Analytic Hierarchy/Network Process. Rev. R. Acad. Cien. Ser. A Mat. 2008, 102, 251–318. [Google Scholar] [CrossRef]
  57. Grzybowski, A.Z.; Starczewski, T. New Look at the Inconsistency Analysis in the Pairwise-Comparisons-Based Prioritization Problems. Expert Syst. Appl. 2020, 113549. [Google Scholar] [CrossRef]
  58. Garuti, C.; Salomon, V.A.P. Compatibility Indices Between Priority Vectors. IJAHP 2012, 4. [Google Scholar] [CrossRef]
  59. Peniwati, K. Group Decision Making: Drawing out and Reconciling Differences. IJAHP 2017, 9. [Google Scholar] [CrossRef]
  60. Zanakis, S.H.; Solomon, A.; Wishart, N.; Dublish, S. Multi-Attribute Decision Making: A Simulation Comparison of Select Methods. Eur. J. Oper. Res. 1998, 107, 507–529. [Google Scholar] [CrossRef]
  61. Emond, E.J.; Mason, D.W. A New Rank Correlation Coefficient with Application to the Consensus Ranking Problem. J. Multi-Criteria Decis. Anal. 2002, 11, 17–28. [Google Scholar] [CrossRef]
  62. Kazibudzki, P.T.; Grzybowski, A.Z. On Some Advancements within Certain Multicriteria Decision Making Support Methodology. AJBM 2013, 2, 143–154. [Google Scholar] [CrossRef] [Green Version]
  63. Kazibudzki, P. On Some Discoveries in the Field of Scientific Methods for Management within the Concept of Analytic Hierarchy Process. Int. J. Bus. Manag. 2013, 8, 22. [Google Scholar] [CrossRef] [Green Version]
  64. Grzybowski, A.Z. Note on a New Optimization Based Approach for Estimating Priority Weights and Related Consistency Index. Expert Syst. Appl. 2012, 39, 11699–11708. [Google Scholar] [CrossRef]
  65. Dong, Q.; Saaty, T.L. An Analytic Hierarchy Process Model of Group Consensus. J. Syst. Sci. Syst. Eng. 2014, 23, 362–374. [Google Scholar] [CrossRef]
  66. Franek, J.; Kresta, A. Judgment Scales and Consistency Measure in AHP. Procedia Econ. Finan. 2014, 12, 164–173. [Google Scholar] [CrossRef] [Green Version]
  67. Wu, H.; Leung, S.-O. Can Likert Scales Be Treated as Interval Scales?—A Simulation Study. J. Soc. Serv. Res. 2017, 43, 527–532. [Google Scholar] [CrossRef]
  68. Starczewski, T. Remarks on the Impact of the Adopted Scale on the Priority Estimation Quality. J. Appl. Math. Comput. Mech. 2017, 16, 105–116. [Google Scholar] [CrossRef] [Green Version]
  69. Starczewski, T. Remarks about Geometric Scale in the Analytic Hierarchy Process. J. Appl. Math. Comput. Mech. 2018, 17. [Google Scholar] [CrossRef]
  70. Grzybowski, A.Z.; Starczewski, T. Simulation Analysis of Prioritization Errors in the AHP and Their Relationship with an Adopted Judgement Scale. In Proceedings of the World Congress on Engineering and Computer Science, San Francisco, CA, USA, 23–25 October 2018. [Google Scholar]
  71. Linares, P. Are Inconsistent Decisions Better? An Experiment with Pairwise Comparisons. Eur. J. Oper. Res. 2009, 193, 492–498. [Google Scholar] [CrossRef]
  72. Grošelj, P.; Stirn, L.Z. Evaluation of Several Approaches for Deriving Weights in Fuzzy Group Analytic Hierarchy Process. J. Decis. Syst. 2018, 27, 217–226. [Google Scholar] [CrossRef]
  73. Grošelj, P.; Pezdevšek Malovrh, Š.; Zadnik Stirn, L. Methods Based on Data Envelopment Analysis for Deriving Group Priorities in Analytic Hierarchy Process. Cent. Eur. J. Oper. Res. 2011, 19, 267–284. [Google Scholar] [CrossRef]
  74. Grošelj, P.; Zadnik Stirn, L. Soft Consensus Model for the Group Fuzzy AHP Decision Making. Croatian Oper. Res. Rev. 2017, 8, 207–220. [Google Scholar] [CrossRef] [Green Version]
  75. Grošelj, P.; Zadnik Stirn, L. The Environmental Management Problem of Pohorje, Slovenia: A New Group Approach within ANP—SWOT Framework. J. Environ. Manag. 2015, 161, 106–112. [Google Scholar] [CrossRef]
  76. Leal, J.E. AHP-Express: A Simplified Version of the Analytical Hierarchy Process Method. MethodsX 2020, 7, 100748. [Google Scholar] [CrossRef]
  77. Kazibudzki, P. Comparison of Analytic Hierarchy Process and Some New Optimization Procedures for Ratio Scaling. Sci. Res. Inst. Math. Comput. Sci. 2011, 10, 101–108. [Google Scholar]
  78. Grzybowski, A.Z. Goal Programming Approach for Deriving Priority Vectors—Some New Ideas. Sci. Res. Inst. Math. Comput. Sci. 2010, 9, 17–27. [Google Scholar]
  79. Liu, F.; Zhang, W.-G.; Wang, Z.-X. A Goal Programming Model for Incomplete Interval Multiplicative Preference Relations and Its Application in Group Decision-Making. Eur. J. Oper. Res. 2012, 218, 747–754. [Google Scholar] [CrossRef]
  80. Schoner, B.; Wedley, W.C. Ambiguous Criteria Weights in AHP: Consequences and Solutions*. Decis. Sci. 1989, 20, 462–475. [Google Scholar] [CrossRef]
  81. Csató, L. Characterization of an Inconsistency Ranking for Pairwise Comparison Matrices. Ann. Oper. Res. 2018, 261, 155–165. [Google Scholar] [CrossRef] [Green Version]
  82. Karanik, M.; Gomez-Ruiz, J.A.; Peláez, J.I.; Bernal, R. Reliability of Ranking-Based Decision Methods: A New Perspective from the Alternatives’ Supremacy. Soft Comput. 2020. [Google Scholar] [CrossRef]
  83. Wu, Z.; Xu, J. A Consistency and Consensus Based Decision Support Model for Group Decision Making with Multiplicative Preference Relations. Decis. Support Syst. 2012, 52, 757–767. [Google Scholar] [CrossRef]
  84. Siraj, S.; Mikhailov, L.; Keane, J. A Heuristic Method to Rectify Intransitive Judgments in Pairwise Comparison Matrices. Eur. J. Oper. Res. 2012, 216, 420–428. [Google Scholar] [CrossRef]
  85. Waite, T.A. Preference for Oddity: Uniqueness Heuristic or Hierarchical Choice Process? Anim. Cogn. 2008, 11, 707–713. [Google Scholar] [CrossRef]
  86. Saaty, T.L.; Tran, L.T. On the Invalidity of Fuzzifying Numerical Judgments in the Analytic Hierarchy Process. Math. Comput. Model. 2007, 46, 962–975. [Google Scholar] [CrossRef]
  87. Saaty, T.L.; Vargas, L.G. The Legitimacy of Rank Reversal. Omega 1984, 12, 513–516. [Google Scholar] [CrossRef]
  88. Xu, W.-J.; Dong, Y.-C.; Xiao, W.-L. Is It Reasonable for Saaty’s Consistency Test in the Pairwise Comparison Method? In Proceedings of the 2008 ISECS International Colloquium on Computing, Communication, Control, and Management, Guangzhou, China, 3–4 August 2008.
  89. Budescu, D.V.; Zwick, R.; Rapoport, A. A Comparison of the Eigenvalue Method and The Geometric Mean Procedure for Ratio Scaling. Appl. Psychol. Measur. 1986, 10, 69–78. [Google Scholar] [CrossRef] [Green Version]
  90. Belton, V.; Gear, T. On a Short-Coming of Saaty’s Method of Analytic Hierarchies. Omega 1983, 11, 228–230. [Google Scholar] [CrossRef]
  91. Belton, V.; Gear, T. The Legitimacy of Rank Reversal—A Comment. Omega 1985, 13, 143–144. [Google Scholar] [CrossRef]
  92. Johnson, C.R.; Beine, W.B.; Wang, T.J. Right-Left Asymmetry in an Eigenvector Ranking Procedure. J. Math. Psychol. 1979, 19, 61–64. [Google Scholar] [CrossRef]
  93. Cavallo, B.; D’Apuzzo, L. A General Unified Framework for Pairwise Comparison Matrices in Multicriterial Methods. Int. J. Intell. Syst. 2009, 24, 377–398. [Google Scholar] [CrossRef] [Green Version]
  94. Eddy, Y.L.; Nazri, E.M.; Mahat, N.I. Identifying Relevant Predictor Variables for a Credit Scoring Model Using Compromised-Analytic Hierarchy Process (Compromised-AHP). ARBMS 2020, 20, 1–13. [Google Scholar] [CrossRef]
  95. Kułakowski, K.; Mazurek, J.; Ramík, J.; Soltys, M. When Is the Condition of Order Preservation Met? Eur. J. Oper. Res. 2019, 277, 248–254. [Google Scholar] [CrossRef] [Green Version]
  96. Starczewski, T. Relationship between Priority Ratios Disturbances and Priority Estimation Errors. J. Appl. Math. Comput. Mech. 2016, 15, 143–154. [Google Scholar] [CrossRef] [Green Version]
  97. Wedley, W.C.; Choo, E.U.; Wijnmalen, D.J.D. Efficacy Analysis of Ratios from Pairwise Comparisons. Fundam. Inform. 2016, 146, 321–338. [Google Scholar] [CrossRef]
  98. Bozóki, S.; Tsyganok, V. The (Logarithmic) Least Squares Optimality of the Arithmetic (Geometric) Mean of Weight Vectors Calculated from All Spanning Trees for Incomplete Additive (Multiplicative) Pairwise Comparison Matrices. Int. J. Gen. Syst. 2019, 48, 362–381. [Google Scholar] [CrossRef]
  99. Choo, E.U.; Wedley, W.C.; Wijnmalen, D.J.D. Mathematical Support for the Geometric Mean When Deriving a Consistent Matrix from a Pairwise Ratio Matrix. Fundam. Inform. 2016, 144, 263–278. [Google Scholar] [CrossRef]
  100. Fichtner, J. On Deriving Priority Vectors from Matrices of Pairwise Comparisons. Socio-Econ. Plan. Sci. 1986, 20, 341–345. [Google Scholar] [CrossRef]
  101. Bajwa, G.; Choo, E.U.; Wedley, W.C. Effectiveness Analysis of Deriving Priority Vectors from Reciprocal Pairwise Comparison Matrices. Asia Pac. J. Oper. Res. 2008, 25, 279–299. [Google Scholar] [CrossRef]
  102. Golany, B.; Kress, M. A Multicriteria Evaluation of Methods for Obtaining Weights from Ratio-Scale Matrices. Eur. J. Oper. Res. 1993, 69, 210–220. [Google Scholar] [CrossRef]
  103. Mazurek, J.; Perzina, R.; Ramík, J.; Bartl, D. A Numerical Comparison of the Sensitivity of the Geometric Mean Method, Eigenvalue Method, and Best–Worst Method. Mathematics 2021, 9, 554. [Google Scholar] [CrossRef]
  104. Lipovetsky, S. Global Priority Estimation in Multiperson Decision Making. J. Optim. Theory Appl. 2008, 140, 77. [Google Scholar] [CrossRef]
  105. Garuti, C.E. A Set Theory Justification of Garuti’s Compatibility Index. J. Multi-Criteria Decis. Anal. 2020, 27, 50–60. [Google Scholar] [CrossRef]
  106. Garuti, C. Measuring in Weighted Environments: Moving from Metric to Order Topology (Knowing When Close Really Means Close); IntechOpen: London, UK, 2016; ISBN 978-953-51-2561-7. [Google Scholar]
Table 1. Formulae for the examined PDMs.
PDM Name   PDM Formula
Logarithmic Least Squares Method (LLSM): $w_i^{\mathrm{LLSM}} = \left(\prod_{j=1}^{n} a_{ij}\right)^{1/n} \Big/ \sum_{i=1}^{n} \left(\prod_{j=1}^{n} a_{ij}\right)^{1/n}$
Simple Normalized Column Sum (SNCS): $w_i^{\mathrm{SNCS}} = \frac{1}{n} \sum_{j=1}^{n} \left( a_{ij} \Big/ \sum_{k=1}^{n} a_{kj} \right)$
Logarithmic Squared Deviations Minimization Method (LSDM): $w^{\mathrm{LSDM}} = \arg\min_{w} \sum_{i=1}^{n} \ln^{2} \left( \sum_{j=1}^{n} a_{ij} w_j \Big/ \left( n\, w_i \right) \right)$
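For readers who wish to experiment with the prioritization rules of Table 1, a minimal numerical sketch is given below. It is illustrative only: the study's simulations were run in Wolfram's Mathematica, the function names are arbitrary, PREV is obtained here with a generic eigensolver, and the optimization-based LSDM is omitted for brevity.

```python
import numpy as np

def llsm(A):
    """LLSM solution: normalized geometric means of the rows of the PCM."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def sncs(A):
    """Simple Normalized Column Sum: average of the column-normalized entries."""
    return (A / A.sum(axis=0)).mean(axis=1)

def prev(A):
    """Principal right eigenvector, normalized to sum to one."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# A consistent 3x3 PCM built from the weights (2, 1, 1); for a consistent
# matrix every PDM must recover the same PV, here [0.5, 0.25, 0.25].
w = np.array([2.0, 1.0, 1.0])
A = np.outer(w, 1.0 / w)
print(llsm(A), sncs(A), prev(A))
```

For an inconsistent PCM the three functions generally return (slightly) different vectors, which is exactly the discrepancy quantified in the tables of this paper.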
Table 2. Formulae for the performance measures.
Performance Measure Name   Performance Measure Formula
Mean Spearman Rank Correlation Coefficient: $\mathrm{MSRC} = \frac{1}{N} \sum_{t=1}^{N} \left[ 1 - 6 \sum_{i=1}^{n} d_i^2 \Big/ \left( n \left( n^2 - 1 \right) \right) \right]_t$
Mean Pearson Correlation Coefficient: $\mathrm{MPCC} = \frac{1}{N} \sum_{t=1}^{N} \left[ \frac{\sum_{i=1}^{n} (w_i - \bar{w})(v_i - \bar{v})}{\sqrt{\sum_{i=1}^{n} (w_i - \bar{w})^2}\, \sqrt{\sum_{i=1}^{n} (v_i - \bar{v})^2}} \right]_t$
Mean Average Absolute Deviation: $\mathrm{MAAD} = \frac{1}{N} \sum_{t=1}^{N} \left[ \frac{1}{n} \sum_{i=1}^{n} \left| w_i - v_i \right| \right]_t$
di—difference between the two ranks of the considered PVs’ respective elements, n—the number of examined elements within a single experiment, N—the number of experiment iterations; wi, vi—i-th elements of the respective PVs that are compared.
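The three statistics of Table 2 can be computed for a single experiment t as sketched below (illustrative NumPy code, not the study's Mathematica implementation; the rank computation assumes no tied priorities, which holds almost surely for simulated PVs). Averaging each statistic over N simulated pairs of true and estimated PVs then yields MSRC, MPCC, and MAAD.

```python
import numpy as np

def spearman_rho(w, v):
    """Spearman rank correlation via the d_i formula of Table 2 (no ties assumed)."""
    n = len(w)
    rw = np.argsort(np.argsort(w))  # ranks of the true PV elements
    rv = np.argsort(np.argsort(v))  # ranks of the estimated PV elements
    d = rw - rv
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

def pearson_r(w, v):
    """Pearson correlation between the two priority vectors."""
    wc, vc = w - w.mean(), v - v.mean()
    return np.sum(wc * vc) / np.sqrt(np.sum(wc ** 2) * np.sum(vc ** 2))

def aad(w, v):
    """Average absolute deviation between the two priority vectors."""
    return np.mean(np.abs(w - v))

# Example: a "true" PV w and an estimated PV v with the same ranking,
# so the Spearman coefficient equals 1 while AAD stays positive.
w = np.array([0.5, 0.3, 0.2])
v = np.array([0.45, 0.35, 0.2])
print(spearman_rho(w, v), pearson_r(w, v), aad(w, v))
```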
Table 3. Approximation quality of four PDMs for 10,000 iterations of the AHP random framework# with the application of rounding errors and reciprocity imposition errors.
PDM Name   MSRC       MPCC       MAAD
LLSM       0.962429   0.997569   0.01083350
PREV       0.962881   0.998005   0.00994892
SNCS       0.961548   0.997740   0.01095650
LSDM       0.963619   0.997964   0.01008080
# A random framework represents a uniformly drawn number of criteria (nk) and number of alternatives (na) in a single AHP model; in this scenario, nk, na ∈ {5, 6, 7, 8, 9}.
Table 4. Approximation quality of four PDMs for 10,000 iterations of the AHP random framework #, i.e., nk, na ∈ {5, 6, …, 15}, with the application of rounding errors and reciprocity imposition errors.
PDM Name   MSRC       MPCC       MAAD
LLSM       0.972270   0.996995   0.00806526
PREV       0.972379   0.997704   0.00724835
SNCS       0.971815   0.997441   0.00788974
LSDM       0.972510   0.997620   0.00736441
# A random framework represents a uniformly drawn number of criteria (nk) and number of alternatives (na) in a single AHP model.
Table 5. Absolute discrepancies of the performance of arbitrarily selected PDMs and PREV for 25,000 cases of various uniformly drawn and uniformly perturbed AHP frameworks (%).
Simulation Parameters($)   STAT   LLSM   SNCS   LSDM
PREV: referential values are located in Table A1.
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.000160  0.000280  0.000080
  MPCC  0.000180  0.000246  0.000017
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000211  0.000787  0.000091
  MPCC  0.000608  0.000138  0.000073
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.014305  0.009523  0.004557
  MPCC  0.006454  0.007787  0.001434
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.027007  0.014009  0.006119
  MPCC  0.016304  0.011317  0.002445
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.001754  0.000463  0.000003
  MPCC  0.000263  0.000281  0.000036
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000880  0.001085  0.000014
  MPCC  0.000380  0.000190  0.000048
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.011820  0.012460  0.003883
  MPCC  0.010472  0.011425  0.002947
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.048165  0.026215  0.010913
  MPCC  0.033336  0.023230  0.007889
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.000345  0.000160  0.000088
  MPCC  0.000021  0.000002  0.000001
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000317  0.000028  0.000061
  MPCC  0.000043  0.000037  0.000010
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.015625  0.009485  0.004011
  MPCC  0.011447  0.011942  0.003528
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.048393  0.029171  0.011378
  MPCC  0.034861  0.026269  0.008804
(%) All simulation scenarios imposed reciprocity conditions on every examined PCM within each AHP framework, applied perturbation factors drawn uniformly from the indicated interval, and applied the rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. ($) STAT stands for "statistics". The selected exemplary scale was a simple ordinal scale from 1 to 50.
Table 6. Absolute discrepancies of the performance of arbitrarily selected PDMs and PREV for 25,000 cases of various uniformly drawn and log-normally perturbed AHP frameworks (%).
Simulation Parameters($)   STAT   LLSM   SNCS   LSDM
PREV: referential values are located in Table A2.
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.000209  0.000214  0.000103
  MPCC  0.000497  0.000565  0.000080
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.001472  0.001332  0.000119
  MPCC  0.000827  0.000330  0.000130
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.003203  0.000480  0.002054
  MPCC  0.002449  0.001016  0.001510
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.003215  0.001696  0.001995
  MPCC  0.002923  0.001597  0.002058
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.001266  0.001071  0.000189
  MPCC  0.000069  0.000121  0.000009
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000476  0.000335  0.000105
  MPCC  0.000428  0.000267  0.000050
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.005631  0.001977  0.002068
  MPCC  0.002043  0.001931  0.001307
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.009619  0.005352  0.004868
  MPCC  0.005899  0.001343  0.003321
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.000523  0.000129  0.000075
  MPCC  0.000036  0.000170  0.000002
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.001390  0.000342  0.000209
  MPCC  0.000013  0.000082  0.000007
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.003425  0.000914  0.002662
  MPCC  0.002411  0.001400  0.001723
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.003060  0.000279  0.003384
  MPCC  0.004000  0.001726  0.002667
(%) All simulation scenarios imposed reciprocity conditions on every examined PCM within each AHP framework, applied perturbation factors drawn log-normally from the indicated interval, and applied the rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. ($) STAT stands for "statistics". The selected exemplary scale was a simple ordinal scale from 1 to 50.
Table 7. Absolute discrepancies in the performance of arbitrarily selected PDMs and PREV for 25,000 cases of various uniformly drawn and truncated-normally perturbed AHP frameworks (%).
Simulation Parameters($)   STAT   LLSM   SNCS   LSDM
PREV: referential values are located in Table A3.
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.002020  0.004480  0.000311
  MPCC  0.000368  0.000523  0.000051
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000415  0.000820  0.000037
  MPCC  0.000494  0.000219  0.000052
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.003180  0.002840  0.000317
  MPCC  0.000239  0.000165  0.000034
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.005738  0.002650  0.000702
  MPCC  0.000411  0.000384  0.000004
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.000222  0.001437  0.000029
  MPCC  0.000242  0.000518  0.000032
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000762  0.000912  0.000038
  MPCC  0.000292  0.000126  0.000022
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.002917  0.001189  0.000346
  MPCC  0.001056  0.000693  0.000191
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.008026  0.004473  0.000812
  MPCC  0.001274  0.000787  0.000164
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.001392  0.000851  0.000132
  MPCC  0.000064  0.000059  0.000004
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000032  0.000095  0.000007
  MPCC  0.000047  0.000033  0.000011
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.004248  0.001957  0.000957
  MPCC  0.001598  0.001334  0.000270
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.009329  0.004291  0.001371
  MPCC  0.001861  0.001278  0.000227
(%) All simulation scenarios imposed reciprocity conditions on every examined PCM within each AHP framework, applied perturbation factors drawn from a truncated normal distribution over the indicated interval, and applied the rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. ($) STAT stands for "statistics". The selected exemplary scale was a simple ordinal scale from 1 to 50.
Table 8. Absolute discrepancies in the performance of arbitrarily selected PDMs and PREV for 25,000 cases of various uniformly drawn and gamma perturbed AHP frameworks (%).
Simulation Parameters($)   STAT   LLSM   SNCS   LSDM
PREV: referential values are located in Table A4.
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.001517  0.000143  0.000011
  MPCC  0.000384  0.000380  0.000050
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000330  0.001173  0.000027
  MPCC  0.000599  0.000243  0.000092
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.015957  0.013069  0.003780
  MPCC  0.009327  0.014993  0.002470
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.027611  0.021041  0.004950
  MPCC  0.024458  0.025688  0.001128
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.000600  0.000137  0.000294
  MPCC  0.000242  0.000263  0.000028
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000543  0.000326  0.000154
  MPCC  0.000322  0.000251  0.000046
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.019162  0.023100  0.007222
  MPCC  0.019843  0.030935  0.006085
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.048069  0.034347  0.013898
  MPCC  0.057891  0.047235  0.012189
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  MSRC  0.000123  0.001300  0.000066
  MPCC  0.000043  0.000029  0.000006
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  MSRC  0.000518  0.000340  0.000061
  MPCC  0.000011  0.000108  0.000007
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  MSRC  0.033649  0.037263  0.015775
  MPCC  0.035832  0.051840  0.014031
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  MSRC  0.079560  0.062324  0.032245
  MPCC  0.124541  0.108218  0.041822
(%) All simulation scenarios imposed reciprocity conditions on every examined PCM within each AHP framework, applied perturbation factors drawn from a gamma distribution over the indicated interval, and applied the rounding errors connected with the assigned preference scale. The scenario assumed 100 perturbations of 250 distinctive AHP frameworks. ($) STAT stands for "statistics". The selected exemplary scale was a simple ordinal scale from 1 to 50.
Table 9. Performance discrepancies of three arbitrarily selected PDMs in relation to PREV for 25,000 cases of different uniformly drawn and variously perturbed AHP frameworks (%).
Simulation Parameters($)   PDM   MAAD[1]   MAAD[2]   MAAD[3]   MAAD[4]
PREV: referential values are located in Table A5.
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  LLSM  0.000584  0.000545  0.000719  0.000493
  SNCS  0.001399  0.001240  0.001820  0.001487
  LSDM  0.000077  0.000094  0.000115  0.000086
Saaty's Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  LLSM  0.000703  0.000721  0.000744  0.000536
  SNCS  0.000503  0.000765  0.000551  0.000630
  LSDM  0.000109  0.000140  0.000097  0.000094
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  LLSM  −0.001188  −0.000620  0.000245  −0.001762
  SNCS  0.001184  0.002267  0.001847  0.000834
  LSDM  −0.000255  −0.000285  0.000051  −0.000517
Saaty's Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  LLSM  −0.001394  −0.000319  0.000197  −0.001295
  SNCS  −0.000133  0.000684  0.000572  −0.001061
  LSDM  −0.000177  −0.000169  0.000072  −0.000101
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  LLSM  0.000791  0.000366  0.000531  0.000297
  SNCS  0.001600  0.001125  0.001102  0.001088
  LSDM  0.000145  0.000064  0.000075  0.000041
Geometric Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  LLSM  0.000546  0.000615  0.000637  0.000473
  SNCS  0.000450  0.000700  0.000404  0.000556
  LSDM  0.000079  0.000097  0.000074  0.000078
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  LLSM  −0.001980  −0.000334  −0.000017  −0.002306
  SNCS  0.000588  0.002569  0.001050  −0.000887
  LSDM  −0.000514  −0.000213  0.000008  −0.000792
Geometric Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  LLSM  −0.002642  −0.000409  0.000015  −0.002901
  SNCS  −0.001178  0.000663  0.000431  −0.002264
  LSDM  −0.000529  −0.000229  0.000037  −0.000691
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {3, 4, …, 7}
  LLSM  0.000001  0.000009  0.000079  −0.000058
  SNCS  0.000259  0.000550  0.000325  0.000351
  LSDM  0.000004  0.000000  0.000018  −0.000006
Selected Exemplary Scale, eij ∈ [0.75, 1.25], nk, na ∈ {8, 9, …, 12}
  LLSM  0.000075  0.000063  0.000089  0.000012
  SNCS  0.000192  0.000347  0.000159  0.000301
  LSDM  0.000020  0.000025  0.000029  0.000016
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {3, 4, …, 7}
  LLSM  −0.002301  −0.000441  −0.000387  −0.004033
  SNCS  −0.000195  0.002836  0.000351  −0.003573
  LSDM  −0.000586  −0.000246  −0.000044  −0.001506
Selected Exemplary Scale, eij ∈ [0.05, 1.95], nk, na ∈ {8, 9, …, 12}
  LLSM  −0.003147  −0.000283  −0.000486  −0.005799
  SNCS  −0.001707  0.000858  0.000061  −0.005315
  LSDM  −0.000677  −0.000199  −0.000044  −0.002030
(%) All simulation scenarios imposed reciprocity conditions for every examined PCM within each AHP framework, with perturbation factors drawn from the indicated interval according to a uniform (MAAD[1]), log-normal (MAAD[2]), truncated-normal (MAAD[3]), or gamma (MAAD[4]) distribution, together with rounding errors connected with the assigned preference scale. A negative value of a particular MAAD for a PDM indicates that it is smaller than the corresponding MAAD of PREV. ($) PDM stands for “priority deriving method”.
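The four perturbation regimes distinguished by MAAD[1] through MAAD[4] can be sketched as random draws confined to the indicated interval. The distribution parameters below are illustrative assumptions, not values taken from the paper; the rejection loop is one simple way to enforce the interval for the non-uniform cases.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturbation_factor(kind, lo=0.05, hi=1.95):
    """Draw one perturbation factor e_ij from the named distribution,
    redrawing until it falls inside [lo, hi] (truncation by rejection)."""
    draws = {
        "uniform":      lambda: rng.uniform(lo, hi),      # MAAD[1]
        "log-normal":   lambda: rng.lognormal(0.0, 0.3),  # MAAD[2]
        "trunc-normal": lambda: rng.normal(1.0, 0.3),     # MAAD[3]
        "gamma":        lambda: rng.gamma(4.0, 0.25),     # MAAD[4]
    }
    x = draws[kind]()
    while not lo <= x <= hi:   # rejection step enforces the interval
        x = draws[kind]()
    return x

samples = {k: [perturbation_factor(k) for _ in range(1000)]
           for k in ("uniform", "log-normal", "trunc-normal", "gamma")}
print({k: round(float(np.mean(v)), 3) for k, v in samples.items()})
```

Each regime produces factors centered near 1 (an unperturbed judgment), and the four MAAD columns then report how far each PDM’s priorities drift under the respective regime.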
Table 10. The average performance of the five selected named PDMs for various uniform constructions of 100,000 AHP models—1000 hypothetical decision problems perturbed 100 times each (*).
Scenario Details | PDM | MARE | Rank | MSRC | Rank | MARR | Rank | Σ Ranks
Geometric Scale, n,m∈{3,4,…,7}, RPCM | LLSM | 0.123288 | 3 | 0.916281 | 1 | 1.04646 | 2 | 6
 | PREV | 0.123030 | 1 | 0.915056 | 4 | 1.04546 | 1 | 6
 | LSDM | 0.123044 | 2 | 0.915489 | 2 | 1.04699 | 3 | 7
 | SNCS | 0.132926 | 4 | 0.915228 | 3 | 1.05865 | 4 | 11
Geometric Scale, n,m∈{3,4,…,7}, APCM | LLSM | 0.100511 | 1 | 0.930242 | 3 | 1.02953 | 3 | 7
 | PREV | 0.101523 | 3 | 0.930164 | 4 | 1.02938 | 2 | 9
 | LSDM | 0.100658 | 2 | 0.930965 | 2 | 1.02926 | 1 | 5
 | SNCS | 0.108689 | 4 | 0.931026 | 1 | 1.04315 | 4 | 9
Geometric Scale, n,m∈{8,9,…,12}, RPCM | LLSM | 0.079748 | 3 | 0.931396 | 1 | 1.03319 | 3 | 7
 | PREV | 0.079110 | 1 | 0.928266 | 4 | 1.03116 | 1 | 6
 | LSDM | 0.079321 | 2 | 0.928817 | 2 | 1.03173 | 2 | 6
 | SNCS | 0.086223 | 4 | 0.928799 | 3 | 1.03935 | 4 | 11
Geometric Scale, n,m∈{8,9,…,12}, APCM | LLSM | 0.063936 | 3 | 0.943393 | 2 | 1.02252 | 3 | 8
 | PREV | 0.062735 | 2 | 0.942399 | 4 | 1.02070 | 1 | 7
 | LSDM | 0.061757 | 1 | 0.944593 | 1 | 1.02109 | 2 | 4
 | SNCS | 0.068981 | 4 | 0.942764 | 3 | 1.02879 | 4 | 11
Saaty’s Scale, n,m∈{3,4,…,7}, RPCM | LLSM | 0.143650 | 3 | 0.911381 | 1 | 1.06578 | 3 | 7
 | PREV | 0.142967 | 1 | 0.911151 | 3 | 1.06498 | 1 | 5
 | LSDM | 0.143069 | 2 | 0.911347 | 2 | 1.06520 | 2 | 6
 | SNCS | 0.155694 | 4 | 0.910735 | 4 | 1.07850 | 4 | 12
Saaty’s Scale, n,m∈{3,4,…,7}, APCM | LLSM | 0.116095 | 1 | 0.927455 | 1 | 1.04681 | 2 | 4
 | PREV | 0.116994 | 3 | 0.926955 | 4 | 1.04705 | 3 | 10
 | LSDM | 0.116337 | 2 | 0.927129 | 3 | 1.04657 | 1 | 6
 | SNCS | 0.127154 | 4 | 0.927397 | 2 | 1.06051 | 4 | 10
Saaty’s Scale, n,m∈{8,9,…,12}, RPCM | LLSM | 0.100279 | 3 | 0.917231 | 1 | 1.04856 | 3 | 7
 | PREV | 0.098084 | 1 | 0.915833 | 3 | 1.04630 | 1 | 5
 | LSDM | 0.098648 | 2 | 0.916245 | 2 | 1.04695 | 2 | 6
 | SNCS | 0.106674 | 4 | 0.915633 | 4 | 1.05424 | 4 | 12
Saaty’s Scale, n,m∈{8,9,…,12}, APCM | LLSM | 0.078464 | 3 | 0.938192 | 2 | 1.03563 | 3 | 8
 | PREV | 0.077002 | 2 | 0.937837 | 3 | 1.03422 | 1 | 6
 | LSDM | 0.076762 | 1 | 0.939669 | 1 | 1.03469 | 2 | 4
 | SNCS | 0.084307 | 4 | 0.937796 | 4 | 1.04125 | 4 | 12
Total Ranks Sum: LLSM = 54, PREV = 54, LSDM = 44, SNCS = 89
Score: LLSM = 2, PREV = 2, LSDM = 1, SNCS = 3
Note: (*) AHP models constructed randomly (uniformly) for a given set of criteria and alternatives. The scenario combines two factors: a disturbance factor drawn from the F-Snedecor probability distribution with n1 = 14 and n2 = 40 degrees of freedom, and round-off errors associated with a given scale (geometric or Saaty’s). It includes calculation of performance measures for reciprocal PCMs (RPCM) and non-reciprocal PCMs (APCM).
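The rank-sum aggregation behind the “Rank” and “Total Ranks Sum” columns of Table 10 can be reproduced directly from the reported values. The sketch below uses the first scenario (Geometric Scale, n,m∈{3,4,…,7}, RPCM) and assumes lower is better for MARE and MARR while higher is better for MSRC; ties are not handled.

```python
# Values copied from the first scenario of Table 10.
mare = {"LLSM": 0.123288, "PREV": 0.123030, "LSDM": 0.123044, "SNCS": 0.132926}
msrc = {"LLSM": 0.916281, "PREV": 0.915056, "LSDM": 0.915489, "SNCS": 0.915228}
marr = {"LLSM": 1.04646, "PREV": 1.04546, "LSDM": 1.04699, "SNCS": 1.05865}

def ranks(values, higher_better=False):
    """Rank the PDMs on one measure: 1 = best (no tie handling)."""
    order = sorted(values, key=values.get, reverse=higher_better)
    return {pdm: order.index(pdm) + 1 for pdm in values}

rank_sum = {pdm: ranks(mare)[pdm]
            + ranks(msrc, higher_better=True)[pdm]
            + ranks(marr)[pdm]
            for pdm in mare}
print(rank_sum)  # {'LLSM': 6, 'PREV': 6, 'LSDM': 7, 'SNCS': 11}
```

Summing these per-scenario rank sums over all eight scenarios yields the Total Ranks Sum row, from which the final scores (1 = best overall) follow.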
Table 11. Results of comparative studies concerning LSDM and PREV for 1000 RTPCMs.
($) STAT | n = 3 | n = 4 | n = 5 | n = 6 | n = 7
MPCC | 1 | 0.999999 | 0.999997 | 0.999993 | 0.999991
MSRC | 1 | 1 | 1 | 1 | 1
($) STAT | n = 8 | n = 9 | n = 10 | n = 11 | n = 12
MPCC | 0.999989 | 0.999991 | 0.999987 | 0.999997 | 0.999988
MSRC | 1 | 1 | 1 | 1 | 1
($) STAT stands for “statistics”.
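Assuming MPCC denotes the mean Pearson correlation coefficient and MSRC the mean Spearman rank correlation between the priority vectors produced by two PDMs, both statistics can be sketched as below. The two priority vectors are hypothetical illustrations, not data from the study.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def spearman(x, y):
    """Spearman rank correlation: the Pearson correlation of the rank vectors."""
    rank = lambda v: np.argsort(np.argsort(v))   # 0-based ranks, no tie handling
    return pearson(rank(x), rank(y))

pv_a = [0.48, 0.31, 0.21]   # hypothetical priority vector from one PDM
pv_b = [0.47, 0.33, 0.20]   # hypothetical priority vector from another PDM
print(pearson(pv_a, pv_b), spearman(pv_a, pv_b))
# the two vectors rank the alternatives identically, so Spearman is 1,
# mirroring the MSRC = 1 rows of Table 11
```

Averaging these coefficients over many simulated PCMs gives the mean statistics reported in the table; values this close to 1 indicate that LSDM and PREV produce nearly indistinguishable priorities and identical rankings for the examined RTPCMs.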