Article

Exploring Consistency in the Three-Option Davidson Model: Bridging Pairwise Comparison Matrices and Stochastic Methods

by
Anna Tóth-Merényi
,
Csaba Mihálykó
*,†,
Éva Orbán-Mihálykó
and
László Gyarmati
Department of Mathematics, University of Pannonia, Egyetem u. 10., H-8200 Veszprém, Hungary
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(9), 1374; https://doi.org/10.3390/math13091374
Submission received: 21 February 2025 / Revised: 4 April 2025 / Accepted: 17 April 2025 / Published: 23 April 2025
(This article belongs to the Section D: Statistics and Operational Research)

Abstract:
In this paper, data consistency in the three-option Davidson model is investigated. Starting from the usual consistency definition in pairwise comparison matrix (PCM)-based methods, we examine its consequences. We formulate an equivalent statement, based on the usual PCM-based consistency definition, for the evaluation results; it aligns with the corresponding statement in the two-option model and establishes a connection between the evaluation results based on PCMs and those obtained from the three-option Davidson model. The theoretical results are complemented by findings based on random simulations, which also demonstrate the connections: the optimal comparison structures are identical to those in the PCM-based methods and in the two-option Bradley–Terry model.

1. Introduction

Comparison in pairs is a very useful method for collecting data. It is particularly beneficial in cases where it is difficult to assign a scale value to each object but much easier to determine which of two objects is better or worse relative to the other. Such decisions are much more reliable than scale values. The method is applied in various fields: risk analysis [1], psychology [2], business decision making [3], marketing [4], group decision making [5], architecture [6], sports [7,8,9], politics [10], education [11], environmental studies [12], social sciences [13,14], artificial intelligence [15], and the list could be continued extensively [16,17]. The main goal is rating and ranking the compared objects based on the results of the comparisons. As the primary comparison results are relations rather than numbers on a scale, the evaluation of paired comparison data requires special evaluation methods.
Two main branches of evaluation methods can be distinguished: pairwise comparison matrix-based methods (PCMs) and Thurstone-motivated methods (THMMs). An outstanding member of the former family is the Analytic Hierarchy Process (AHP), associated with Saaty's name [18,19]. The starting point of PCMs is a real square matrix of size $n \times n$ with reciprocal symmetry ($a_{j,i} = 1/a_{i,j}$), where n is the number of objects to evaluate. Although the decisions between the objects are verbal (equal, better, much better, extremely better, etc.), they are converted into numbers, and these numbers are the elements of the matrices. The conversion methods differ, ranging from the usual values 1, 3, 5, 7, and 9 and their reciprocals to the ratios of the numbers of each type of decision [8]. The evaluation methods are also varied: the most commonly used is the eigenvector method (EM) [20], but the logarithmic least squares method (LLSM) is also frequently applied [21]. The result of the evaluation is a normalized vector with positive coordinates, usually called the priority vector. Its coordinates represent the weights (strengths) of the individual objects to be evaluated.
Another branch of paired comparison models is the stochastic models. In this case, the relations themselves are not converted directly into numbers. Thurstone's idea, that latent random variables underlie the objects to be evaluated and that decisions depend on the current values of these random variables, holds great potential. Thurstone worked with the Gaussian distribution allowing two options [22] and applied the least squares parameter estimation method. Later, the logistic distribution was applied with maximum likelihood estimation (MLE) [23]. The number of options has been extended to three, allowing the option 'equal' next to the options 'worse' and 'better' [24,25]. More than three options were analyzed in [26], applying the least squares estimation method, and in [27], applying MLE and allowing a wide class of distributions.
It is always an exciting challenge in research to recognize the connections between models based on different foundations. In the field of paired comparisons, an interesting question is what the relationship between PCMs and stochastic paired comparison methods can be. Based on empirical evidence, a connection between them was presented in [28]. Recently, a theoretical relationship was proved for the two-option Bradley–Terry model and PCM-based methods [29]. The key to the relation was the concept of consistency, i.e., the absence of contradiction among the data [19]. It has been proved that in the case of data consistency, the two-option Bradley–Terry model with MLE parameter estimation and PCMs with the LLSM or eigenvector method provide the same priority vectors. If data are inconsistent, the optimal comparison structures are proved to be the same in the case of 4, 5, and 6 objects to evaluate. However, the relation between three-option stochastic models and PCM-based models has not been revealed. This gap is filled in the current paper in the case of a three-option stochastic model.
In this paper, we investigate a stochastic three-option model from the aspect of data consistency. This model is a modified version of the three-option Bradley–Terry model. The modification was performed by Davidson in [30] so that Luce's choice axiom is satisfied in the stochastic model even in the case of three options. Luce's choice axiom expresses that the probabilities of the comparison results between two objects depend only on their strengths and are not affected by other objects [31,32]. In the case of the Bradley–Terry model allowing two options (BT2), Luce's choice axiom is satisfied, but in the case of the Bradley–Terry model allowing three options (BT3), it is not. In Davidson's modified model, however, Luce's choice axiom holds.
Our article is structured as follows. In Section 2, the investigated Davidson model is presented. In Section 3, the concepts of consistency in PCM models and in the Davidson model are detailed, along with their relationship, based on theoretical considerations. The connection between the Davidson model and PCM-based models is also supported by the simulation results in Section 4. The methodology of the simulation is described in Section 4.1, and the results of the simulations are presented in Section 4.2 and Section 4.3. Finally, the paper is closed with a brief summary.

2. The Investigated Model: Davidson Model

To ensure that Luce’s choice axiom is satisfied in the model, Davidson introduced a modified three-option model as follows [30].
Consider a vector $\underline{\pi} = (\pi_1, \dots, \pi_n)$, $0 < \pi_i$, that satisfies $\sum_{i=1}^{n} \pi_i = 1$. The probabilities of the options 'worse', 'equal', and 'better' are assumed to be

$$p_{i,j,1} = P(i \text{ is worse than } j) = \frac{\pi_j}{\pi_i + \pi_j + \nu\sqrt{\pi_i \pi_j}},\tag{1}$$

$$p_{i,j,2} = P(i \text{ is equal to } j) = \frac{\nu\sqrt{\pi_i \pi_j}}{\pi_i + \pi_j + \nu\sqrt{\pi_i \pi_j}},\tag{2}$$

$$p_{i,j,3} = P(i \text{ is better than } j) = \frac{\pi_i}{\pi_i + \pi_j + \nu\sqrt{\pi_i \pi_j}}\tag{3}$$

with some

$$0 < \nu.\tag{4}$$

It is easy to see that if $\nu = 0$, we return to BT2.
One can check that, due to (1)–(3),

$$\frac{p_{i,j,2}}{\sqrt{p_{i,j,1} \cdot p_{i,j,3}}} = \nu.\tag{5}$$

In this model, Luce's choice axiom [31,32] is fulfilled, as

$$\frac{p_{i,j,3}}{p_{i,j,1}} = \frac{\pi_i}{\pi_j}.\tag{6}$$
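To make the model concrete, the probabilities (1)–(3) and the two identities above can be checked numerically. The following minimal Python sketch is illustrative only (the strengths 0.3 and 0.1 and the value of the tie parameter are arbitrary choices, not taken from the paper):

```python
import math

def davidson_probs(pi_i, pi_j, nu):
    """Three-option Davidson probabilities ('worse', 'equal', 'better')
    for a pair with strengths pi_i, pi_j and tie parameter nu."""
    denom = pi_i + pi_j + nu * math.sqrt(pi_i * pi_j)
    return (pi_j / denom,                          # p_{i,j,1}: i worse than j
            nu * math.sqrt(pi_i * pi_j) / denom,   # p_{i,j,2}: i equal to j
            pi_i / denom)                          # p_{i,j,3}: i better than j

p1, p2, p3 = davidson_probs(0.3, 0.1, 2.0)
assert abs(p1 + p2 + p3 - 1.0) < 1e-12             # the three options are exhaustive
assert abs(p2 / math.sqrt(p1 * p3) - 2.0) < 1e-12  # the ratio recovers nu
assert abs(p3 / p1 - 0.3 / 0.1) < 1e-12            # Luce's axiom: p3/p1 = pi_i/pi_j
```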
Data are contained in a three-dimensional matrix A. Its element $A_{i,j,k}$ represents the number of decisions in the comparison of object i and object j for which the result belongs to option k ($k=1$ stands for 'worse', $k=2$ stands for 'equal', and $k=3$ stands for 'better'). Of course, $A_{i,i,k} = 0$ for $k = 1, 2, 3$, and due to the symmetry, $A_{j,i,1} = A_{i,j,3}$ and $A_{j,i,2} = A_{i,j,2}$.
The likelihood function represents the probability of the data as a function of the unknown parameters. Assuming an independent sample, it can be written as follows:

$$L(A \mid (\pi_1, \dots, \pi_n, \nu)) = \prod_{k=1}^{3} \prod_{i=1}^{n-1} \prod_{j=i+1}^{n} (p_{i,j,k})^{A_{i,j,k}}.\tag{7}$$

The log-likelihood function is its logarithm:

$$\log L(A \mid (\pi_1, \dots, \pi_n, \nu)) = \sum_{k=1}^{3} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} A_{i,j,k} \cdot \log(p_{i,j,k}) = \sum_{k=1}^{3} \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} 0.5 \cdot A_{i,j,k} \cdot \log(p_{i,j,k}).\tag{8}$$

The maximum likelihood estimate of the parameters (MLEP) is the argument of the maximum value of the function (8), i.e.,

$$\hat{B} = (\hat{\underline{\pi}}, \hat{\nu}) = \arg\max_{\underline{\pi} > 0,\ \nu > 0,\ \sum_{i=1}^{n} \pi_i = 1} \log L(A \mid (\pi_1, \dots, \pi_n, \nu)).\tag{9}$$
Note that the parameters $\pi_i$ could be multiplied by a positive constant without changing (8), since the probabilities (1)–(3) depend only on the ratios of the parameters. This multiplication possibility is eliminated by requiring that the sum of the coordinates equals 1. Moreover, if all data $A_{i,j,k}$ are multiplied by the same positive constant, the maximum value is also multiplied by it, but the argument of the maximum, i.e., the estimate MLEP, does not change.
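Both invariances are easy to verify numerically. The sketch below is illustrative only (the data dictionary A and all parameter values are made-up examples): it implements the log-likelihood (8) directly and checks that scaling the strengths by a constant leaves the value unchanged, while scaling the data multiplies the value by that constant.

```python
import math

def log_lik(A, pi, nu):
    """Log-likelihood (8): A maps pairs (i, j) with i < j to the counts
    (A_{i,j,1}, A_{i,j,2}, A_{i,j,3}); pi is a list of strengths."""
    ll = 0.0
    for (i, j), (a1, a2, a3) in A.items():
        denom = pi[i] + pi[j] + nu * math.sqrt(pi[i] * pi[j])
        ll += a1 * math.log(pi[j] / denom)                        # 'worse'
        ll += a2 * math.log(nu * math.sqrt(pi[i] * pi[j]) / denom)  # 'equal'
        ll += a3 * math.log(pi[i] / denom)                        # 'better'
    return ll

A = {(0, 1): (2, 1, 4), (1, 2): (3, 2, 2), (0, 2): (1, 1, 5)}
pi, nu, c = [0.2, 0.3, 0.5], 1.5, 7.0
base = log_lik(A, pi, nu)

scaled_pi = [c * p for p in pi]
assert abs(log_lik(A, scaled_pi, nu) - base) < 1e-9      # pi enters only via ratios
scaled_A = {k: tuple(c * a for a in v) for k, v in A.items()}
assert abs(log_lik(scaled_A, pi, nu) - c * base) < 1e-9  # data scaling scales (8)
```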
To investigate the evaluability of the data, i.e., the existence of the maximum value and the uniqueness of its argument, we need the following graph definitions and theorem.
Definition 1
(graph of comparisons $GRC_E(3)$). Let the objects to be evaluated be assigned to the nodes of the graph. Let there be an edge between i and j ($i \neq j$) if there is a comparison between i and j, i.e.,

$$\sum_{k=1}^{3} A_{i,j,k} > 0.\tag{10}$$

Let the set of edges be denoted by E. This graph is denoted by $GRC_E(3)$.
Definition 2
(directed graph $GRDIR_A(3)$). Let the objects to be evaluated be assigned to the nodes of the graph. Let there be a directed edge from i to j ($i \to j$) if there is a decision that i is considered 'better' than j, that is, $A_{i,j,3} > 0$. Let there be a bi-directed edge between i and j ($i \leftrightarrow j$) if i is considered equal to j according to a decision, that is, $A_{i,j,2} > 0$. This graph is denoted by $GRDIR_A(3)$.
Theorem 1
([33]). The maximum value of the log-likelihood function (8) exists and its argument is unique under the conditions $0 < \underline{\pi}$, $0 < \nu$, and $\sum_{i=1}^{n} \pi_i = 1$ if and only if the data matrix A satisfies the following set of conditions:
(A)
There is at least one index pair $(i,j)$ for which $0 < A_{i,j,2}$.
(B)
For any partition S and $\bar{S}$, $S \cup \bar{S} = \{1, 2, \dots, n\}$, $S \cap \bar{S} = \emptyset$, there is at least one element $i \in S$ and $j \in \bar{S}$ for which $0 < A_{i,j,2}$, or there are two (not necessarily different) pairs $(i_1, j_1)$ and $(i_2, j_2)$, $i_1, i_2 \in S$, $j_1, j_2 \in \bar{S}$, for which $0 < A_{i_1,j_1,3}$ and $0 < A_{i_2,j_2,1}$.
(C)
With the graph given in Definition 2, there exists a directed cycle $(i_1, i_2, i_3, \dots, i_l, i_1)$, where $i_k$, $k = 1, 2, \dots, l$, are nodes, in which the number of directed 'better' edges exceeds the number of bi-directed 'equal' edges.
The proof of Theorem 1 can be found in [33].

3. Consistency and Its Theoretical Consequences

3.1. Consistency of Data in the Case of PCM-Based Methods

The concept of consistency and the degree of data inconsistency are central issues in PCMs. The most commonly used concept of consistency is the following: denote by $a_{i,j}$ ($i = 1, \dots, n$; $j = 1, \dots, n$) the elements of the (multiplicative) pairwise comparison matrix $\mathbf{a}$, $a_{i,j} = 1/a_{j,i}$. If the comparison is complete (every object is compared to every other object, i.e., $0 < a_{i,j}$, $i = 1, \dots, n$; $j = 1, \dots, n$), then consistency means the following relations:

$$a_{i,j} = a_{i,l} \cdot a_{l,j}, \quad i = 1, \dots, n;\ j = 1, \dots, n;\ l = 1, \dots, n.\tag{11}$$

In the case of incomplete comparisons, the elements of the matrix $\mathbf{a}$ corresponding to the missing comparisons remain empty. In such cases, the following generalized consistency definition is given [34]: for all cycles $(i_1, \dots, i_l, i_1)$ in the graph of comparisons, the equality

$$a_{i_1,i_2} \cdot a_{i_2,i_3} \cdot \dots \cdot a_{i_l,i_1} = 1\tag{12}$$

holds. In both the complete and the incomplete case, assuming a consistent data matrix $\mathbf{a}$, the result of the evaluation is a priority vector $\underline{w} = (w_1, \dots, w_n)$ with properties $0 < w_i$, $i = 1, \dots, n$, and $\sum_{i=1}^{n} w_i = 1$, such that

$$a_{i,j} = \frac{w_i}{w_j},\tag{13}$$

and relation (13) guarantees the consistency.
Numerous metrics have been developed to measure inconsistency [18,35,36]. Recently, the case of incomplete comparisons has become the subject of research; see, for example, [37].
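For readers less familiar with PCMs, the triple-product condition and the LLSM evaluation can be sketched in a few lines of Python. The matrix below is an arbitrary consistent example, not taken from the paper; for a complete PCM, the LLSM priority vector is the vector of normalized row geometric means, which then satisfies (13):

```python
import math, itertools

# A small consistent complete PCM (illustrative values)
a = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
n = len(a)

# Consistency: a[i][j] == a[i][l] * a[l][j] for all index triples
consistent = all(abs(a[i][j] - a[i][l] * a[l][j]) < 1e-12
                 for i, j, l in itertools.product(range(n), repeat=3))

# LLSM: normalized row geometric means give the priority vector w
gm = [math.prod(row) ** (1 / n) for row in a]
w = [g / sum(gm) for g in gm]

assert consistent
# For consistent matrices, w reproduces the matrix elements as ratios (13)
assert all(abs(a[i][j] - w[i] / w[j]) < 1e-9 for i in range(n) for j in range(n))
```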

3.2. Consistency in the Davidson Model

The concept of consistency is rarely examined in the case of models with a stochastic background. In [29], the two-option Bradley–Terry model was scrutinized, and the consistency of the data was defined within it. In the conference paper [38], the three-option Davidson model was investigated; these examinations are now extended.
First, we prove that if we start from a parameter set and define the data as the probabilities given by (1)–(3), and moreover the graph of comparisons $GRC_E(3)$ is connected, then the maximum likelihood estimate of the parameters coincides with the starting parameter set.
Theorem 2.
Let $\underline{\pi}^{(0)} = (\pi_1^{(0)}, \dots, \pi_n^{(0)})$ be a starting initial vector with $0 < \pi_i^{(0)}$ and $\sum_{i=1}^{n} \pi_i^{(0)} = 1$, and let $0 < \nu^{(0)}$. Let $p_{i,j,k}^{(0)}$ be computed from this parameter set $(\underline{\pi}^{(0)}, \nu^{(0)})$ applying formulas (1), (2), and (3). Fix a graph of comparisons $GRC_E(3)$, incomplete or complete; the set of its edges is denoted by E. Let the data be defined as follows:

$$A_{i,j,k}^{(0)} = \begin{cases} p_{i,j,k}^{(0)}, & \text{if } (i,j) \in E \\ 0, & \text{otherwise.} \end{cases}\tag{14}$$

If the graph of comparisons $GRC_E(3)$ is connected, then the log-likelihood function (8) attains its maximum value at $\underline{\pi}^{(0)} = (\pi_1^{(0)}, \dots, \pi_n^{(0)})$ and $\nu^{(0)}$, i.e.,

$$(\hat{\underline{\pi}}, \hat{\nu}) = (\underline{\pi}^{(0)}, \nu^{(0)}).\tag{15}$$
Proof. 
First of all, we note that if $(i,j) \in E$, then $0 < p_{i,j,1}^{(0)}$, $0 < p_{i,j,2}^{(0)}$, and $0 < p_{i,j,3}^{(0)}$. In other words, for a fixed pair $(i,j)$, either all data $A_{i,j,k}^{(0)}$, $k = 1, 2, 3$, are positive, or all of them equal zero. Therefore, the connectedness of the comparison graph $GRC_E(3)$ belonging to the data set $A^{(0)}$ guarantees that conditions (A), (B), and (C) are satisfied. Consequently, if the data are defined by (14), the necessary and sufficient condition for a unique maximizer is the connectedness of the graph $GRC_E(3)$.
Let us define the following function:

$$F(x_{1,1}, x_{1,2}, \dots, x_{|E|,1}, x_{|E|,2}) = \sum_{s=1}^{|E|} \left( B_{s,1} \cdot \log(x_{s,1}) + B_{s,2} \cdot \log(x_{s,2}) + B_{s,3} \cdot \log(1 - x_{s,1} - x_{s,2}) \right)\tag{16}$$

on the region $0 < x_{s,1}$, $0 < x_{s,2}$, and $x_{s,1} + x_{s,2} < 1$. Here, $|E|$ is the number of compared pairs, and there is a one-to-one correspondence between the indices s and the edges $(i,j)$ in E. Moreover, matching (16) with the log-likelihood function (8), $B_{s,k} = A_{i,j,k}^{(0)}$, $x_{s,1} = p_{i,j,1}$, and $x_{s,2} = p_{i,j,2}$. To find the maximum value, take the partial derivatives with respect to $x_{s,1}$ and $x_{s,2}$, $s = 1, \dots, |E|$; setting them equal to zero, we obtain

$$B_{s,1} \cdot \frac{1}{x_{s,1}} - B_{s,3} \cdot \frac{1}{1 - x_{s,1} - x_{s,2}} = 0\tag{17}$$

and

$$B_{s,2} \cdot \frac{1}{x_{s,2}} - B_{s,3} \cdot \frac{1}{1 - x_{s,1} - x_{s,2}} = 0.\tag{18}$$

From (17) and (18), and since here $B_{s,1} + B_{s,2} + B_{s,3} = 1$, it follows that

$$x_{s,1} = B_{s,1} \quad \text{and} \quad x_{s,2} = B_{s,2}.\tag{19}$$
Using the Hessian matrix, we can check that at this point the extremum exists and that it is a maximum.
Now the question is whether there exists a parameter set $(\hat{\underline{\pi}}, \hat{\nu})$ for which

$$x_{s,1} = \frac{\hat{\pi}_j}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}}\tag{20}$$

and

$$x_{s,2} = \frac{\hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}}.\tag{21}$$

We know that the starting parameter set $(\underline{\pi}^{(0)}, \nu^{(0)})$ satisfies Equations (20) and (21). Moreover, as the argument of the maximum value is unique, we can conclude that

$$(\hat{\underline{\pi}}, \hat{\nu}) = (\underline{\pi}^{(0)}, \nu^{(0)}). \quad \square\tag{22}$$
Now we show that the priority vectors obtained by the LLSM and EM methods applied to a PCM match the evaluation results obtained in the Davidson model if the elements of the PC matrix are defined based on the probabilities from the previous statement.
Theorem 3.
Let us define the data by (14), and let the elements of the PCM $\mathbf{a}$ be

$$a_{i,j} = \frac{A_{i,j,3}^{(0)}}{A_{i,j,1}^{(0)}} = \frac{P(\xi_j < \xi_i)}{P(\xi_i < \xi_j)} = \frac{\pi_i^{(0)}}{\pi_j^{(0)}}, \quad \text{if } (i,j) \in E.\tag{23}$$

Assume that the graph of comparisons $GRC_E(3)$ is connected. Then, the results of the evaluation of the PCM $\mathbf{a}$ by LLSM and also by EM coincide with the MLEP in the Davidson model, i.e., with $\underline{\pi}^{(0)}$.
Proof. 
Due to the definition of $a_{i,j}$, the PCM is consistent; therefore, if the result of the evaluation is $\underline{w}^{(0)}$, then $a_{i,j} = w_i^{(0)}/w_j^{(0)}$. This property also holds for $\underline{\pi}^{(0)}$. As the sums of the coordinates equal 1 in both cases, the equality of the two vectors is guaranteed. □
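Theorem 3 can be illustrated numerically: starting from a hypothetical strength vector and tie parameter (the values below are arbitrary choices, not from the paper), build the PCM from the ratios of the 'better' and 'worse' probabilities and evaluate it by LLSM (row geometric means); the normalized result is the starting vector again. A minimal sketch:

```python
import math

pi0, nu0 = [0.5, 0.3, 0.2], 1.7   # assumed strengths (sum to 1) and nu
n = len(pi0)

# PCM built from the Davidson probabilities: a[i][j] = p_better / p_worse
a = [[1.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            denom = pi0[i] + pi0[j] + nu0 * math.sqrt(pi0[i] * pi0[j])
            a[i][j] = (pi0[i] / denom) / (pi0[j] / denom)  # equals pi0[i]/pi0[j]

# LLSM evaluation: normalized row geometric means
gm = [math.prod(row) ** (1 / n) for row in a]
w = [g / sum(gm) for g in gm]
assert all(abs(w[i] - pi0[i]) < 1e-9 for i in range(n))  # LLSM recovers pi0
```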
The following statement expresses the equivalence between the consistency concept used for PCMs (see (12)) and the property that, in the Davidson model, the data ratios can be written as ratios of the coordinates of the priority vector.
Theorem 4.
Let $E = \{(i,j) : 0 < \sum_{k=1}^{3} A_{i,j,k}\}$ and let $GRC_E(3)$ be the graph of comparisons defined by the edges in E. Suppose that $GRC_E(3)$ is connected. Assume that the data $A_{i,j,k}$ satisfy the following requirements.
If $(i,j) \in E$, then

$$0 < A_{i,j,k}, \quad k = 1, 2, 3,\tag{24}$$

and moreover,

$$A_{i,j,2} = \nu \cdot \sqrt{A_{i,j,1} \cdot A_{i,j,3}}\tag{25}$$

with a common value $0 < \nu$. Let us introduce the notation

$$h_{i,j} = \frac{A_{i,j,3}}{A_{i,j,1}}.\tag{26}$$

Then, for every cycle $(i_1, i_2, \dots, i_l, i_1)$ in $GRC_E(3)$,

$$h_{i_1,i_2} \cdot \dots \cdot h_{i_l,i_1} = 1\tag{27}$$

holds if and only if

$$\frac{A_{i,j,3}}{A_{i,j,1}} = \frac{\hat{\pi}_i}{\hat{\pi}_j}\tag{28}$$

for the MLEP $\hat{\underline{\pi}}$ in (9), with coordinates $0 < \hat{\pi}_i$ and $\sum_{i=1}^{n} \hat{\pi}_i = 1$.
Proof. 
First, we prove that (27) implies (28).
The log-likelihood function, written in the form (16) with $B_{s,k} = A_{i,j,k}$ and $x_{s,k} = p_{i,j,k}$, takes its extremum at

$$x_{s,1} = \frac{B_{s,1}}{B_{s,1} + B_{s,2} + B_{s,3}} = \frac{A_{i,j,1}}{A_{i,j,1} + A_{i,j,2} + A_{i,j,3}}\tag{29}$$

and

$$x_{s,3} = \frac{B_{s,3}}{B_{s,1} + B_{s,2} + B_{s,3}} = \frac{A_{i,j,3}}{A_{i,j,1} + A_{i,j,2} + A_{i,j,3}}.\tag{30}$$

It can be checked that the extremum at this point is a maximum. We verify whether there exists a vector $\hat{\underline{\pi}}$, $0 < \hat{\pi}_i$, $\sum_{i=1}^{n} \hat{\pi}_i = 1$, and $0 < \hat{\nu}$, such that

$$\frac{A_{i,j,1}}{A_{i,j,1} + A_{i,j,2} + A_{i,j,3}} = \frac{\hat{\pi}_j}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}},\tag{31}$$

$$\frac{A_{i,j,2}}{A_{i,j,1} + A_{i,j,2} + A_{i,j,3}} = \frac{\hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}},\tag{32}$$

$$\frac{A_{i,j,3}}{A_{i,j,1} + A_{i,j,2} + A_{i,j,3}} = \frac{\hat{\pi}_i}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}}.\tag{33}$$

To satisfy the equalities (31)–(33), the equalities

$$\frac{A_{i,j,3}}{A_{i,j,1}} = \frac{\hat{\pi}_i}{\hat{\pi}_j}\tag{34}$$

and

$$\hat{\nu} = \frac{A_{i,j,2}}{\sqrt{A_{i,j,1} \cdot A_{i,j,3}}}\tag{35}$$

have to be satisfied for every $(i,j) \in E$. Equation (35) is satisfied due to (25), that is, $\hat{\nu} = \nu$ in (35).
Let us turn to (34). If $GRC_E(3)$ is a spanning tree, then start from object 1, and set $\hat{\pi}_1^1 = 1$. Now, walking along the edges of $GRC_E(3)$, the coordinates of $\hat{\underline{\pi}}^1$ can be defined by

$$\hat{\pi}_j^1 = \hat{\pi}_i^1 \cdot \frac{A_{i,j,1}}{A_{i,j,3}}.\tag{36}$$

There is no cycle in a spanning tree; therefore, every coordinate is reached once and only once. Taking

$$\hat{\pi}_i = \frac{\hat{\pi}_i^1}{\sum_{k=1}^{n} \hat{\pi}_k^1},\tag{37}$$

we obtain an appropriate vector in the case of a spanning tree.
Turn to the case where $GRC_E(3)$ is not a spanning tree. Let I be a spanning tree in $GRC_E(3)$, $I \subseteq GRC_E(3)$. Define the parameter vector as in the case of spanning trees. Take an edge $(i,j) \in E$. If $(i,j)$ is an edge of the tree I, then (34) holds. If $(i,j)$ is not contained among the edges of I, then there exists a path from i to j in the spanning tree I; let it be $(i = i_1, i_2, \dots, i_l = j)$. Consider the cycle $(i_1, i_2, \dots, i_l, i_1)$. As (27) holds, and

$$\frac{A_{i_1,i_2,3}}{A_{i_1,i_2,1}} = \frac{\hat{\pi}_{i_1}}{\hat{\pi}_{i_2}}\tag{38}$$

and so on along the path, we can conclude that

$$\frac{\hat{\pi}_{i_1}}{\hat{\pi}_{i_l}} \cdot h_{i_l,i_1} = 1,\tag{39}$$

i.e.,

$$h_{j,i} = \frac{\hat{\pi}_j}{\hat{\pi}_i}.\tag{40}$$

So, one can see that (34) is true for every edge in $GRC_E(3)$.
Now, let us consider the opposite direction of the equivalence. If (28) holds, then multiply these values along the edges of a cycle. After performing the simplifications, it is obvious that the product equals 1. □
Remark 1.
We draw attention to the fact that if the graph of comparisons is connected and the data satisfy (24) and (25), then, as a consequence of (27), the data are proportional to the probabilities:

$$A_{i,j,1} = K_{i,j} \cdot \frac{\hat{\pi}_j}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}},\tag{41}$$

$$A_{i,j,2} = K_{i,j} \cdot \frac{\hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}},\tag{42}$$

$$A_{i,j,3} = K_{i,j} \cdot \frac{\hat{\pi}_i}{\hat{\pi}_i + \hat{\pi}_j + \hat{\nu}\sqrt{\hat{\pi}_i \hat{\pi}_j}}.\tag{43}$$

Here, $K_{i,j} = A_{i,j,1} + A_{i,j,2} + A_{i,j,3}$ is the number of comparisons between the objects i and j.
The presented analogy between the properties of PCM models and the Davidson model is used to define data consistency in the Davidson model as follows.
Definition 3.
Data A are said to be consistent in the Davidson model if (24) and (25) are satisfied and, with the notation (26), for every cycle $(i_1, i_2, \dots, i_l, i_1)$ in $GRC_E(3)$,

$$h_{i_1,i_2} \cdot \dots \cdot h_{i_l,i_1} = 1.\tag{44}$$

Here, E is the set of pairs $(i,j)$ for which there is a comparison between i and j.
Data are inconsistent if they are not consistent.
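Definition 3 can be turned into an algorithmic check. Rather than enumerating all cycles, the sketch below (a hypothetical helper, not the authors' code) follows the spanning-tree argument from the proof of Theorem 4: it verifies (24), verifies that (25) holds with a common value of the tie parameter, propagates tentative strengths along a BFS tree, and then tests the ratio condition (34) on every edge; on a connected comparison graph this is equivalent to the cycle condition (27).

```python
import math
from collections import deque

def is_consistent(A, tol=1e-9):
    """A maps ordered pairs (i, j) to counts (A1, A2, A3) for the options
    'worse', 'equal', 'better'. Assumes the comparison graph is connected
    and that object 0 occurs in some comparison."""
    if any(min(counts) <= 0 for counts in A.values()):     # condition (24)
        return False
    nus = [a2 / math.sqrt(a1 * a3) for (a1, a2, a3) in A.values()]
    if max(nus) - min(nus) > tol:                          # condition (25): common nu
        return False
    h = {}                                                 # notation (26), both directions
    for (i, j), (a1, _, a3) in A.items():
        h[(i, j)], h[(j, i)] = a3 / a1, a1 / a3
    s, queue = {0: 1.0}, deque([0])                        # BFS from object 0
    while queue:
        i = queue.popleft()
        for (u, v) in h:
            if u == i and v not in s:
                s[v] = s[i] / h[(i, v)]                    # propagate strengths as in (36)
                queue.append(v)
    # cycle condition: h must equal the strength ratio on every edge
    return all(abs(h[(i, j)] - s[i] / s[j]) < tol for (i, j) in h)

# Hypothetical counts generated from strengths (4, 2, 1) with nu = 2:
A = {(0, 1): (2.0, 2 * math.sqrt(8), 4.0),
     (1, 2): (1.0, 2 * math.sqrt(2), 2.0),
     (0, 2): (1.0, 4.0, 4.0)}
assert is_consistent(A)
A[(0, 2)] = (4.0, 8.0, 4.0)   # (25) still holds, but the cycle product is not 1
assert not is_consistent(A)
```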
Remark 2.
The requirement (25) requires some explanation. The geometric mean property (5) holds for the probabilities, and relative frequencies are close to the probabilities in the case of many comparisons; therefore, requiring the geometric mean property for the data does not seem to be a very strict constraint.
Remark 3.
The equivalence of (27) and (28) coincides with the analogous result in the case of PCMs [34], and this result is true in a three-option model. Consequently, a connection can be established not only between two-option models and PCM-based models but also between the three-option Davidson model and PCM-based models through the concept of consistency given for incomplete PCMs in [34]. Theorem 4 states a perfect analogy among the consistent BT2, PCMs, and the three-option Davidson model through the priority vectors and through

$$\frac{A_{i,j,3}}{A_{i,j,1}} = \frac{p_{i,j,3}}{p_{i,j,1}} = \frac{P(\xi_j < \xi_i)}{P(\xi_i < \xi_j)} = \frac{\hat{\pi}_i}{\hat{\pi}_j} = \frac{w_i}{w_j}.\tag{45}$$

To summarize the similarities among the different models, see Table 1 below.
Finally, we present an interesting property of consistent data in the Davidson model: in the case of a consistent data set, if we multiply the data of any pair in comparisons by a positive constant, the evaluation result does not change. This obviously holds in the case of PCMs too, but it is not true for inconsistent data in the Davidson model. To demonstrate this, let us see the following examples. Consider the data matrix $A^{(1)}$ given in Table 2.
One can easily check that $A^{(1)}$ is consistent with $\nu = 2$. The result of the evaluation is

$$\hat{\underline{\pi}}^{(1)} = (0.0976, 0.0976, 0.0244, 0.3902, 0.3902).\tag{46}$$

As a consequence of Theorem 4, the same vector is the result of the evaluation if we consider

$$A_{i,j,k}^{(2)} = \begin{cases} A_{i,j,k}^{(1)}, & \text{if } (i,j) \neq (1,3) \\ 10 \cdot A_{i,j,k}^{(1)}, & \text{if } (i,j) = (1,3). \end{cases}\tag{47}$$
We mention that in the case of consistent data, there is no need to reduce the effect of non-equal comparison numbers [39], as this has no impact on the evaluation result.
The PCM matrix defined by

$$a_{i,j}^{(1)} = \frac{A_{i,j,3}^{(1)}}{A_{i,j,1}^{(1)}} = \frac{A_{i,j,3}^{(2)}}{A_{i,j,1}^{(2)}}\tag{48}$$

is contained in Table 3. The results of the evaluation by LLSM and EM equal $\hat{\underline{\pi}}^{(1)}$.
It is obvious that the PCM matrix does not change if data belonging to a fixed pair are multiplied by a positive constant, so the evaluation result by LLSM or EM does not change either.
Next, let $A^{(3)}$ be another data matrix, modified as follows:

$$A_{i,j,k}^{(3)} = \begin{cases} A_{1,3,1}^{(3)} = 1,\ A_{1,3,2}^{(3)} = 2,\ A_{1,3,3}^{(3)} = 1, & \text{if } (i,j) = (1,3) \\ A_{i,j,k}^{(1)}, & \text{if } (i,j) \neq (1,3). \end{cases}\tag{49}$$

In this case, the MLEP is

$$\hat{\underline{\pi}}^{(3)} = (0.0507, 0.0925, 0.0278, 0.4145, 0.4145).\tag{50}$$

If

$$A_{i,j,k}^{(4)} = \begin{cases} A_{i,j,k}^{(3)}, & \text{if } (i,j) \neq (1,3) \\ 10 \cdot A_{i,j,k}^{(3)}, & \text{if } (i,j) = (1,3), \end{cases}\tag{51}$$

then

$$\hat{\underline{\pi}}^{(4)} = (0.0325, 0.0879, 0.0295, 0.4251, 0.4251),\tag{52}$$
which differs from $\hat{\underline{\pi}}^{(3)}$. The data sets $A^{(3)}$ and $A^{(4)}$ are not consistent: property (25) is satisfied, but (27) is not. If we define the PCM matrices as the ratios of the coordinates of $\hat{\underline{\pi}}^{(3)}$ and $\hat{\underline{\pi}}^{(4)}$, we re-obtain $\hat{\underline{\pi}}^{(3)}$ and $\hat{\underline{\pi}}^{(4)}$, respectively. If, however, we define the PCM matrices by the ratios $A_{i,j,3}^{(3)}/A_{i,j,1}^{(3)}$ or $A_{i,j,3}^{(4)}/A_{i,j,1}^{(4)}$, we do not obtain $\hat{\underline{\pi}}^{(3)}$ or $\hat{\underline{\pi}}^{(4)}$.

4. Connections in the Case of Inconsistent Data, Based on Simulations

In Section 3, we presented the concept of consistent and inconsistent data in the case of PCMs, and we defined the concept of consistent data in the case of the Davidson model. Moreover, based on theoretical considerations, we could set up a link between PCMs and the three-option Davidson model, and also between BT2 and the three-option Davidson model, in the case of consistent data. If data are inconsistent, this type of relation cannot be proved, but we can present some common features of the mentioned models by computer simulations.

4.1. Method of Simulations

Due to the complexity of the analytical calculations, we perform Monte Carlo simulations, similar to those in [29], to examine information retrieval from incomplete comparisons in the Davidson model. This section describes the simulation methodology.
The steps of the simulations are as follows:
1. Generate a normalized random n-length vector $\underline{\pi}$, and assign a positive value $\nu$. The coordinates of $\underline{\pi}$ and $\nu$ are uniformly distributed random values in the interval $[0, 1]$. These are the initial parameter values of the Davidson model; $\underline{\pi}$ is referred to as the initial priority vector.
2. Calculate the probabilities of the comparison results from the initial values, applying formulas (1)–(3) for every possible pair. In this way, we generate a consistent and complete data set of comparison results.
3. As we know from Theorem 2, in the case of consistent comparison data, MLE recovers the initial values in all (complete or incomplete) cases. For this reason, perturbations are performed on the consistent data set as follows: each probability value is modified by adding an independent, uniformly distributed random number from the range $[-\varepsilon, \varepsilon]$. The value of $\varepsilon$ can be set between 0 and 1, but we use values between 0 and 0.1. We guarantee that the resulting perturbed values remain between 0 and 1, then we normalize them by dividing by their sum. These data serve as an inconsistent complete data set.
4. Using the above perturbed probability values as a data set, the estimated vector $\hat{\underline{\pi}}$ and value $\hat{\nu}$ are calculated by MLE. The vector $\hat{\underline{\pi}}$ is referred to as the estimated priority vector based on the complete comparison and is denoted by $\hat{\underline{\pi}}_{co}$.
5. In the next step, we calculate the estimated priority vectors for different graph structures as follows. We omit data from the complete data set: for each fixed connected graph, we keep only the data that belong to the comparison structure associated with the graph. The remaining data set is incomplete and inconsistent. After performing MLE, the estimated priority vector is called the priority vector belonging to the incomplete data set and is denoted by $\hat{\underline{\pi}}_{inc}$. We want to determine how much information is retained from the initial priority vector, on the one hand, and from the estimated priority vector based on the complete comparison, on the other hand.
6. The differences between the priority vectors computed from the complete and incomplete data sets for the fixed graph structure are determined using various measures. To analyze the similarities of the rankings, we use two rank correlations and two additional distance measures:
   - Spearman's rank correlation $\rho$:

     $$\rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)},\tag{53}$$

     where $d_i$ is the difference in the rankings of the ith object;
   - Kendall's rank correlation $\tau$:

     $$\tau = \frac{2(n_C - n_D)}{n(n-1)},\tag{54}$$

     where $n_C$ is the number of concordant pairs and $n_D$ is the number of discordant pairs;
   - the Pearson correlation coefficient

     $$PE = \frac{\sum_{i=1}^{n} (\hat{\pi}_{inc,i} - \bar{\hat{\pi}}_{inc}) \cdot (\hat{\pi}_{co,i} - \bar{\hat{\pi}}_{co})}{\sqrt{\left(\sum_{i=1}^{n} (\hat{\pi}_{inc,i} - \bar{\hat{\pi}}_{inc})^2\right) \cdot \left(\sum_{i=1}^{n} (\hat{\pi}_{co,i} - \bar{\hat{\pi}}_{co})^2\right)}},\tag{55}$$

     where $\hat{\pi}_{inc,i}$ is the ith coordinate of the estimated priority vector from the incomplete comparison, $\hat{\pi}_{co,i}$ is the ith coordinate of the estimated priority vector from the complete comparison, and the overline denotes the arithmetic average of the coordinates. Each of the correlation coefficients lies in the interval $[-1, 1]$; the closer the result is to 1, the more information is recovered about the ranking or about the coordinates of the priority vector;
   - the Euclidean distance of the estimated parameter vectors

     $$EU = \sqrt{\sum_{i=1}^{n} (\hat{\pi}_{inc,i} - \hat{\pi}_{co,i})^2}.\tag{56}$$

     The Euclidean distance is always non-negative and can be larger than 1; its largest value is $\sqrt{2}$, attained if the Euclidean norms of the vectors equal 1. In this case, the smaller value represents better information retrieval.
7. It is also interesting to see how much information is retained from the initial priority vector in the case of different comparison structures. Here, both the perturbation and the omission of part of the data may cause information loss. The same similarity measures as in Step 6 are calculated according to Formulas (53)–(56), but $\hat{\underline{\pi}}_{co}$ is substituted with $\underline{\pi}$.
8. Repeat the above steps N times, where N is the number of simulations. The similarity measures described in Step 6 are random, due to the random initial parameter vector and the random perturbation values. Therefore, we take their averages over the simulations for each fixed comparison structure. These average values characterize the information retrieval associated with the fixed comparison structures.
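The four measures (53)–(56) are standard; a plain-Python sketch of them (ties in the rankings are ignored for simplicity, and the example vectors are illustrative) could look as follows:

```python
import math

def ranks(v):
    """Rank positions (1 = largest coordinate); ties are not handled."""
    order = sorted(range(len(v)), key=lambda i: -v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(u, v):                                   # (53)
    n = len(u)
    d = [a - b for a, b in zip(ranks(u), ranks(v))]
    return 1 - 6 * sum(x * x for x in d) / (n * (n * n - 1))

def kendall(u, v):                                    # (54)
    n, nc, nd = len(u), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (u[i] - u[j]) * (v[i] - v[j])
            nc += s > 0                               # concordant pair
            nd += s < 0                               # discordant pair
    return 2 * (nc - nd) / (n * (n - 1))

def pearson(u, v):                                    # (55)
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

def euclidean(u, v):                                  # (56)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Two hypothetical priority vectors with identical rankings:
u, v = [0.4, 0.3, 0.2, 0.1], [0.35, 0.32, 0.18, 0.15]
assert spearman(u, v) == 1.0 and kendall(u, v) == 1.0
```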

4.2. Results of Simulations in the Case of Uniformly Distributed Random Perturbation Values

As in the article [29], the numbers of objects to be compared are n = 4, 5, and 6. For four objects, six different graphs can be distinguished, including the complete comparison. For five objects, there are 21 different graphs, while for six objects, 112 different graph structures exist. The number of simulations is $N = 10^6$. The random coordinates of the initial priority vector are uniformly distributed. The perturbation values are uniformly distributed random values from the intervals $[-0.025, 0.025]$, $[-0.05, 0.05]$, $[-0.075, 0.075]$, and $[-0.1, 0.1]$.
In summary, the same conclusions can be drawn in the three-option Davidson model as in PCMs and in the two-option Bradley–Terry model. We illustrate these observations with a few figures.
Figure 1 shows that the ranking of the graph structures is essentially independent of the chosen similarity measure. We experience this for all investigated ranges of the random perturbation values. This is true even if the similarity measure is computed from the initial parameter vector and the estimated parameter vector based on the complete comparison. This observation confirms that most of the information retrieval is determined by the graph structure.
It is important to highlight that the closer a correlation value is to one, the more information is recovered; for the Euclidean distance, this relationship is reversed.
We examine how the similarity measures change as a function of the range of the random perturbation values when the graph structure is fixed. As expected, increasing the perturbation enlarges the Euclidean distances and reduces the correlations. Figure 2 illustrates this for the star graph.
In the cases of  n = 4  and  n = 5 , Table 4 and Table 5 present the computed average similarity measures for each comparison (graph) structure with uniformly distributed random perturbation values in  [ − 0.1 , 0.1 ] . The rows represent the graph structures and the columns the similarity measures, with the left side showing the comparison to the priority vector estimated from the complete comparison data and the right side showing the comparison to the initial priority vector. Every similarity measure designates the same structure as the best for a given number of edges, regardless of whether the priority vector from the incomplete comparison is compared to the initial priority vector or to the one estimated from the complete comparison data. For the optimal comparison structures, the graphs themselves are also depicted.
The same conclusions can be stated if  n = 6 . Table 6 contains the data only for the optimal graph structures, as the number of comparison structures is high. It is a very important observation that the optimal comparison structures are the same in the Davidson model, PCMs, and the BT2 model for n = 4, 5, and 6. This supports the idea that the optimal comparison structures do not depend on either the model selection or the number of options.
The simulations show that when using the best comparison structure for a given number of edges, a higher number of edges leads to better information retrieval. This holds true for all object numbers (n = 4, 5, and 6), all ranges of the uniformly distributed perturbation values, and all tested measurements.
For fixed edge numbers, Figure 3 shows the Pearson correlations and Euclidean distances belonging to the optimal structures, compared both to the initial parameter vector and to the parameter vector estimated from the complete comparison. Although the values differ, the tendencies coincide: as expected, the correlation values increase monotonically, while the Euclidean distances decrease monotonically, as a function of the number of edges. On the right side of Figure 3, the topmost value in blue is obviously zero.
As already established for the BT2 model in [29], although the best graph structure with k edges is always weaker than the best graph structure with  k + 1  edges, there are cases where the best graph structure with k edges outperforms the worst graph structure with  k + 1  edges. We observe similar behavior for the Davidson model. Figure 4 illustrates this for each indicator by comparing the best graph with seven edges ( G 30 ) and the worst graph with eight edges ( G 57 ). The situation is the same whether we compare the parameters estimated from the incomplete data to those estimated from the complete data, or to the initial parameter vector. This observation is highly significant because it shows that the structure of the comparisons used to collect data matters; therefore, using optimal comparison structures is worthwhile for better information retrieval.
We also test what can be stated for additional numbers of objects. Based on the simulations, we observe that for  n = 7 , 8 , 9  objects and  n − 1  compared pairs, the star graph always proves to be the best structure, while for n compared pairs, the best structure is the cycle. We suggest using these structures for  n > 9  as well.
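The two recommended structures are easy to construct for any n. A small sketch follows; the helper names are ours, not the paper's notation.

```python
def star_edges(n):
    """Star graph on objects 1..n: object 1 is compared with every
    other object, giving n - 1 compared pairs."""
    return [(1, j) for j in range(2, n + 1)]

def cycle_edges(n):
    """Cycle graph on objects 1..n: each object is compared with its
    two neighbours, giving n compared pairs."""
    return [(j, j % n + 1) for j in range(1, n + 1)]
```

For example, star_edges(5) yields the four pairs (1,2), (1,3), (1,4), (1,5), while cycle_edges(5) closes the path 1-2-3-4-5 with the pair (5,1).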

4.3. Results of Simulations in the Case of Gaussian-Distributed Random Perturbation Values

In order to make our model more realistic, we also perform the perturbation with Gaussian-distributed values. The only modification in the simulation process is in Step 3: the uniformly distributed random values are replaced by Gaussian-distributed ones.
In the case of  n = 5 , Table 7 presents the computed average similarity measures for each comparison (graph) structure with Gaussian-distributed perturbation values. First, similarly to the uniformly distributed case, the expectation of the perturbations is zero, but the dispersion is  1 / 30 . With this choice, the Gaussian-distributed random perturbation values fall into the interval  [ − 0.1 , 0.1 ]  almost always, i.e., with probability 0.997.
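The quoted probability can be verified directly: with dispersion (standard deviation) 1/30, the bound 0.1 is exactly three standard deviations, so for a zero-mean Gaussian variable (a quick check, not part of the original computation):

```python
import math

sigma = 1 / 30
bound = 0.1  # equals 3 * sigma

# P(|X| <= bound) for X ~ N(0, sigma^2), via the error function:
# P(|X| <= b) = erf(b / (sigma * sqrt(2)))
prob = math.erf(bound / (sigma * math.sqrt(2)))
# prob is approximately 0.9973, the familiar three-sigma probability
```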
The structure of Table 7 is the same as that of Table 5. The observations coincide with those for uniformly distributed random perturbations, but the similarity measures are better: the Euclidean distances are smaller, and the correlations are higher. For each number of compared pairs, the optimal comparison structures coincide, regardless of whether the perturbations are Gaussian-distributed or uniformly distributed random values. This observation also holds for  n = 4  and  n = 6  objects.
We also investigate the case where the perturbation follows a certain tendency: its expectation is increased from zero, while the dispersion is kept at 1/30. Table 8 presents the results for Gaussian-distributed random perturbation values with mean 0.01 and standard deviation 1/30. Comparing Table 7 and Table 8, we can see that, although the distances are usually larger and the correlations smaller, the optimal graph structures do not change when the expectation of the perturbation is increased from zero to a positive value. Every conclusion drawn in Section 4.2 also holds in the case of Gaussian-distributed perturbations with nonzero expectation, which supports the robustness of the optimal graph structures.

5. Summary

The main finding of the paper is a relation between two main branches of paired comparison methods: a three-option paired comparison model with stochastic background and the multiplicative pairwise comparison matrix-based model. The key point was the definition of consistency in PCM models. Starting from Luce's choice axiom, we characterized data consistency in the Davidson model. We proved that, similarly to the two-option Bradley–Terry model, in the case of consistent data the priority vector of the Davidson model coincides with the priority vector provided by the PCM, and also with the priority vector of BT2. This is not true for inconsistent data. For inconsistent data, investigating the comparison structures by simulation, we established that the same conclusions can be drawn for the Davidson model as for the two-option Bradley–Terry model: the optimal graph structures coincide with those obtained for PCMs, and these structures form the sequence presented in [29,40]. This holds both when the evaluation results derived from incomplete comparisons after the perturbation are compared to the initial priority vector, and when they are compared to the results derived from the complete perturbed comparisons, independently of the distribution of the perturbation values. Therefore, the optimal information retrieval structure depends neither on the number of options nor on the model.
Based on simulations for  n = 4 , 5 , 6 , 7 , 8 , 9 , we observed that the best comparison structure is the star graph when only  n − 1  pairs are allowed for comparison, and the cycle when n pairs are allowed. Therefore, we suggest using these comparison structures when dealing with a large number n of objects and  n − 1  or n compared pairs. Further investigations could address the three-option Bradley–Terry and Thurstone–Mosteller models when Luce's choice axiom is not fulfilled.

Author Contributions

Conceptualization, C.M. and É.O.-M.; methodology, A.T.-M. and L.G.; software, A.T.-M. and L.G.; validation, C.M.; writing—original draft preparation, É.O.-M. and A.T.-M.; writing—review and editing, A.T.-M. and É.O.-M.; visualization, A.T.-M.; supervision, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All of the data are generated randomly by computer.

Acknowledgments

This work has been implemented by the TKP2021-NVA-10 project with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the 2021 Thematic Excellence Programme funding scheme. The authors are grateful to Zsombor Szádoczki for providing them with the equivalent graph structures.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PCM: Paired Comparison Matrices-based Method
THMM: Thurstone motivated method
AHP: Analytic Hierarchy Process
EM: Eigenvector method
LLSM: logarithmic least squares method
MLE: maximum likelihood estimation
BT2: Bradley–Terry model that allows two options
BT3: Bradley–Terry model allowing three options in choices
MLEP: maximum likelihood estimate of the parameters
GRC: graph of comparisons
GRDIR: directed graph

References

  1. Mangla, S.K.; Kumar, P.; Barua, M.K. Risk analysis in green supply chain using fuzzy AHP approach: A case study. Resour. Conserv. Recycl. 2015, 104, 375–390. [Google Scholar] [CrossRef]
  2. Shi, W. Construction and evaluation of college students’ psychological quality evaluation model based on Analytic Hierarchy Process. J. Sens. 2022, 2022, 3896304. [Google Scholar] [CrossRef]
  3. Koczkodaj, W.W.; LeBrasseur, R.; Wassilew, A.; Tadeusziewicz, R. About business decision making by a consistency-driven pairwise comparisons method. J. Appl. Comput. Sci. 2009, 17, 55–70. [Google Scholar]
  4. Wind, Y.; Saaty, T.L. Marketing applications of the analytic hierarchy process. Manag. Sci. 1980, 26, 641–658. [Google Scholar] [CrossRef]
  5. Brunelli, M. A study on the anonymity of pairwise comparisons in group decision making. Eur. J. Oper. Res. 2019, 279, 502–510. [Google Scholar] [CrossRef]
  6. Amorocho, J.A.P.; Hartmann, T. A multi-criteria decision-making framework for residential building renovation using pairwise comparison and TOPSIS methods. J. Build. Eng. 2022, 53, 104596. [Google Scholar] [CrossRef]
  7. Baker, R.D.; McHale, I.G. A dynamic paired comparisons model: Who is the greatest tennis player? Eur. J. Oper. Res. 2014, 236, 677–684. [Google Scholar] [CrossRef]
  8. Bozóki, S.; Csató, L.; Temesi, J. An application of incomplete pairwise comparison matrices for ranking top tennis players. Eur. J. Oper. Res. 2016, 248, 211–218. [Google Scholar] [CrossRef]
  9. Baker, R.D.; McHale, I.G. Estimating age-dependent performance in paired comparisons competitions: Application to snooker. J. Quant. Anal. Sports 2024, 20, 113–125. [Google Scholar] [CrossRef]
  10. Gisselquist, R.M. Paired comparison and theory development: Considerations for case selection. PS Polit. Sci. Polit. 2014, 47, 477–484. [Google Scholar] [CrossRef]
  11. Crompvoets, E.A.; Béguin, A.A.; Sijtsma, K. Adaptive pairwise comparison for educational measurement. J. Educ. Behav. Stat. 2020, 45, 316–338. [Google Scholar] [CrossRef]
  12. Williamson, T.B.; Watson, D.O.T. Assessment of community preference rankings of potential environmental effects of climate change using the method of paired comparisons. Clim. Change 2010, 99, 589–612. [Google Scholar] [CrossRef]
  13. Verschuren, P.; Arts, B. Quantifying influence in complex decision making by means of paired comparisons. Qual. Quant. 2005, 38, 495–516. [Google Scholar] [CrossRef]
  14. Tarricone, P.; Newhouse, C.P. A study of the use of pairwise comparison in the context of social online moderation. Aust. Educ. Res. 2016, 43, 273–288. [Google Scholar] [CrossRef]
  15. Oliveira, I.F.; Ailon, N.; Davidov, O. A new and flexible approach to the analysis of paired comparison data. J. Mach. Learn. Res. 2018, 19, 1–29. [Google Scholar]
  16. Saaty, T.L.; Vargas, L.G. The Logic of Priorities: Applications of Business, Energy, Health and Transportation; Springer Science & Business Media: Dordrecht, The Netherlands, 2013. [Google Scholar]
  17. Vaidya, O.S.; Kumar, S. Analytic hierarchy process: An overview of applications. Eur. J. Oper. Res. 2006, 169, 1–29. [Google Scholar] [CrossRef]
  18. Saaty, T.L. The Analytic Hierarchy Process: Planning, Priority Setting, Resource, Allocation; McGraw-Hill: New York, NY, USA, 1980. [Google Scholar]
  19. Saaty, T.L. How to make a decision: The analytic hierarchy process. Eur. J. Oper. Res. 1990, 48, 9–26. [Google Scholar] [CrossRef]
  20. Saaty, T.L. Decision-making with the AHP: Why is the principal eigenvector necessary. Eur. J. Oper. Res. 2003, 145, 85–91. [Google Scholar] [CrossRef]
  21. Bozóki, S.; Fülöp, J.; Rónyai, L. On optimal completion of incomplete pairwise comparison matrices. Math. Comput. Model. 2010, 52, 318–333. [Google Scholar] [CrossRef]
  22. Thurstone, L.L. A law of comparative judgment. Psychol. Rev. 1927, 34, 273–286. [Google Scholar] [CrossRef]
  23. Bradley, R.A.; Terry, M.E. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika 1952, 39, 324–345. [Google Scholar] [CrossRef]
  24. Glenn, W.A.; David, H.A. Ties in paired-comparison experiments using a modified Thurstone-Mosteller model. Biometrics 1960, 16, 86–109. [Google Scholar] [CrossRef]
  25. Rao, P.V.; Kupper, L.L. Ties in paired-comparison experiments: A generalization of the Bradley-Terry model. J. Am. Stat. Assoc. 1967, 62, 194–204. [Google Scholar] [CrossRef]
  26. Agresti, A. Analysis of ordinal paired comparison data. J. R. Stat. Soc. Ser. C Appl. Stat. 1992, 41, 287–297. [Google Scholar] [CrossRef]
  27. Orbán-Mihálykó, É.; Mihálykó, C.; Koltay, L. Incomplete paired comparisons in case of multiple choice and general log-concave probability density functions. Cent. Eur. J. Oper. Res. 2019, 27, 515–532. [Google Scholar] [CrossRef]
  28. Orbán-Mihálykó, É.; Koltay, L.; Szabó, F.; Csuti, P.; Kéri, R.; Schanda, J. A new statistical method for ranking of light sources based on subjective points of view. Acta Polytech. Hung. 2015, 12, 195–214. [Google Scholar]
  29. Gyarmati, L.; Orbán-Mihálykó, É.; Mihálykó, C.; Szádoczki, Z.; Bozóki, S. The incomplete Analytic Hierarchy Process and Bradley–Terry model: (In)consistency and information retrieval. Expert Syst. Appl. 2023, 229, 120522. [Google Scholar] [CrossRef]
  30. Davidson, R.R. On Extending the Bradley-Terry Model to Accommodate Ties in Paired Comparison Experiments. J. Am. Stat. Assoc. 1970, 65, 317–328. [Google Scholar] [CrossRef]
  31. Luce, R.D. Individual Choice Behavior; Wiley: New York, NY, USA, 1959; Volume 4. [Google Scholar]
  32. Luce, R.D. The choice axiom after twenty years. J. Math. Psychol. 1977, 15, 215–233. [Google Scholar] [CrossRef]
  33. Mihálykó, C.; Gyarmati, L.; Orbán-Mihálykó, É.; Mihálykó, A. Evaluability of paired comparison data in stochastic paired comparison models: Necessary and sufficient condition. arXiv 2025, arXiv:2502.13617. [Google Scholar]
  34. Bozóki, S.; Tsyganok, V. The (logarithmic) least squares optimality of the arithmetic (geometric) mean of weight vectors calculated from all spanning trees for incomplete additive (multiplicative) pairwise comparison matrices. Int. J. Gen. Syst. 2019, 48, 362–381. [Google Scholar] [CrossRef]
  35. Kazibudzki, P.T. Redefinition of triad’s inconsistency and its impact on the consistency measurement of pairwise comparison matrix. J. Appl. Math. Comput. Mech. 2016, 15, 71–78. [Google Scholar] [CrossRef]
  36. Brunelli, M. A survey of inconsistency indices for pairwise comparisons. Int. J. Gen. Syst. 2018, 47, 751–771. [Google Scholar] [CrossRef]
  37. Ágoston, K.C.; Csató, L. Inconsistency thresholds for incomplete pairwise comparison matrices. Omega 2022, 108, 102576. [Google Scholar] [CrossRef]
  38. Mihálykó, C.; Orbán-Mihálykó, É.; Gyarmati, L. Consistency and Inconsistency in the Case of a Stochastic Paired Comparison Model. In Proceedings of the 2024 IEEE 3rd Conference on Information Technology and Data Science (CITDS), Debrecen, Hungary, 26–28 August 2024; pp. 1–6. [Google Scholar] [CrossRef]
  39. Chartier, T.P.; Harris, J.; Hutson, K.R.; Langville, A.N.; Martin, D.; Wessell, C.D. Reducing the effects of unequal number of games on rankings. IMAGE Bull. Int. Linear Algebra Soc. 2014, 52, 15–23. [Google Scholar]
  40. Bozóki, S.; Szádoczki, Z. Optimal sequences for pairwise comparisons: The graph of graphs approach. arXiv 2022, arXiv:2205.08673. [Google Scholar]
Figure 1. Average correlations defined by (53) (Spearman), (54) (Kendall), (55) (Pearson), and the average Euclidean distance defined by (56) for  n = 5  objects, with uniformly distributed perturbation values in  [ − 0.05 , 0.05 ] . The comparison to the priority vector computed from the perturbed complete data set is shown on the left, and to the initial priority vector on the right.
Figure 2. The Euclidean distance (on the left) and correlation values (on the right) for the star-graph are presented for different ranges of the random perturbation values, comparing the initial priority vector to the estimated priority vector from the complete data set, for  n = 5  objects.
Figure 3. Pearson correlation and Euclidean distance between the estimated priority vectors belonging to the optimal graph structures and both the estimate from the complete graph and the initial priority vector, as a function of the number of edges, in the case of n = 6 objects and random perturbation values uniformly distributed in  [ − 0.05 , 0.05 ] .
Figure 4. Comparison of the best graph structure with 7 edges ( G 30 ) and the worst graph structure with 8 edges ( G 57 ) in the case of n = 6 objects and random perturbation values uniformly distributed in  [ − 0.05 , 0.05 ] . Differences from the estimated priority vector based on complete data are shown on the left, differences from the initial priority vector on the right.
Table 1. Similarities between paired comparison models concerning the concept of consistency.
PCM | BT2 Model | Three-Option Davidson Model
∏_circle a_{i,j} = 1 | h_{i,j} = A_{i,j,2} / A_{i,j,1} | h_{i,j} = A_{i,j,3} / A_{i,j,1}
  | ∏_circle h_{i,j} = 1 | ∏_circle h_{i,j} = 1 and A_{i,j,2} = ν · √(A_{i,j,1} · A_{i,j,3})
a_{i,j} = π_i / π_j | A_{i,j,2} / A_{i,j,1} = π_i / π_j = p_{i,j,2} / p_{i,j,1} = P(i is better than j) / P(j is better than i) | A_{i,j,3} / A_{i,j,1} = π_i / π_j = p_{i,j,3} / p_{i,j,1} = P(i is better than j) / P(j is better than i), with p_{i,j,2} = ν · √(p_{i,j,1} · p_{i,j,3})
Table 2. Data matrix  A ( 1 ) .
Pairs | A_{i,j,1}^{(1)} | A_{i,j,2}^{(1)} | A_{i,j,3}^{(1)}
(1,2) | 1 | 2 | 1
(1,3) | 1 | 4 | 4
(1,4) | 0 | 0 | 0
(1,5) | 0 | 0 | 0
(2,3) | 1 | 4 | 4
(2,4) | 4 | 4 | 1
(2,5) | 0 | 0 | 0
(3,4) | 16 | 8 | 1
(3,5) | 0 | 0 | 0
(4,5) | 1 | 2 | 1
Table 3. PCM data matrix defined by the ratios of data belonging to options ‘better’ and ‘worse’ from the data of Table 2. * denotes that there is no comparison between the objects.
i \ j | 1 | 2 | 3 | 4 | 5
1 | 1 | 1 | 4 | * | *
2 | 1 | 1 | 4 | 0.25 | *
3 | 0.25 | 0.25 | 1 | 0.0625 | *
4 | * | 4 | 16 | 1 | 1
5 | * | * | * | 1 | 1
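As a cross-check, the data of Table 2 satisfy the Davidson consistency relation of Table 1 and induce the PCM entries of Table 3. The sketch below encodes our reading of the conventions: the third option counts "i better", the first "j better", and the tie parameter ν = 2 is inferred from the data (an assumption, not stated explicitly here).

```python
import math

# Table 2: pair -> (A1, A2, A3); all-zero rows (omitted comparisons) skipped
data = {
    (1, 2): (1, 2, 1),
    (1, 3): (1, 4, 4),
    (2, 3): (1, 4, 4),
    (2, 4): (4, 4, 1),
    (3, 4): (16, 8, 1),
    (4, 5): (1, 2, 1),
}

nu = 2  # tie parameter inferred from the data (assumption)

# Davidson consistency condition: A2 = nu * sqrt(A1 * A3) for every pair
consistent = all(
    math.isclose(a2, nu * math.sqrt(a1 * a3))
    for (a1, a2, a3) in data.values()
)

# induced PCM entries a_ij = A3 / A1, matching Table 3
pcm = {pair: a3 / a1 for pair, (a1, a2, a3) in data.items()}
```

Under this reading every compared pair fulfills the condition, and, e.g., the pair (3,4) yields the PCM entry 1/16 = 0.0625 of Table 3.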
Table 4. Average similarity measures when the distance is related to the priority vector from the complete comparison data and to the initial priority vector for  n = 4  objects and uniformly distributed random perturbation values in  [ − 0.1 , 0.1 ] .
Left block: incomplete  π ^ ̲ inc  versus complete  π ^ ̲ co ; right block: incomplete  π ^ ̲ inc  versus initial  π ̲ . The Graph column refers to the structure drawings (images i001–i004) of the original article.
ID | Edges | Graph | ρ | τ | PE | EU | ρ | τ | PE | EU
1 | 3 | i001 | 0.845 | 0.786 | 0.890 | 0.107 | 0.792 | 0.721 | 0.840 | 0.136
2 | 3 |  | 0.835 | 0.774 | 0.880 | 0.114 | 0.784 | 0.711 | 0.832 | 0.140
3 | 4 |  | 0.892 | 0.845 | 0.935 | 0.075 | 0.828 | 0.763 | 0.881 | 0.111
4 | 4 | i002 | 0.912 | 0.869 | 0.959 | 0.060 | 0.845 | 0.783 | 0.904 | 0.097
5 | 5 | i003 | 0.946 | 0.918 | 0.979 | 0.037 | 0.863 | 0.805 | 0.920 | 0.086
6 | 6 | i004 | 1 | 1 | 1 | 0 | 0.881 | 0.829 | 0.937 | 0.074
Table 5. Average similarity measures when the distance is related to the priority vector from the complete comparison data and to the initial priority vector for n = 5 objects and uniformly distributed random perturbation values in  [ − 0.1 , 0.1 ] .
Left block: incomplete  π ^ ̲ inc  versus complete  π ^ ̲ co ; right block: incomplete  π ^ ̲ inc  versus initial  π ̲ . The Graph column refers to the structure drawings (images i005–i011) of the original article.
ID | Edges | Graph | ρ | τ | PE | EU | ρ | τ | PE | EU
1 | 4 | i005 | 0.839 | 0.759 | 0.873 | 0.119 | 0.799 | 0.710 | 0.835 | 0.139
2 | 4 |  | 0.827 | 0.746 | 0.860 | 0.126 | 0.789 | 0.698 | 0.823 | 0.145
3 | 4 |  | 0.817 | 0.735 | 0.850 | 0.131 | 0.780 | 0.689 | 0.814 | 0.150
4 | 5 |  | 0.868 | 0.797 | 0.904 | 0.097 | 0.824 | 0.739 | 0.864 | 0.121
5 | 5 |  | 0.878 | 0.809 | 0.918 | 0.089 | 0.833 | 0.750 | 0.877 | 0.113
6 | 5 |  | 0.864 | 0.792 | 0.899 | 0.101 | 0.820 | 0.735 | 0.860 | 0.124
7 | 5 |  | 0.855 | 0.782 | 0.891 | 0.105 | 0.812 | 0.727 | 0.852 | 0.127
8 | 5 | i006 | 0.889 | 0.822 | 0.933 | 0.080 | 0.843 | 0.762 | 0.892 | 0.105
9 | 6 |  | 0.897 | 0.837 | 0.934 | 0.076 | 0.848 | 0.769 | 0.892 | 0.104
10 | 6 | i007 | 0.916 | 0.860 | 0.958 | 0.061 | 0.865 | 0.789 | 0.914 | 0.091
11 | 6 |  | 0.899 | 0.838 | 0.939 | 0.073 | 0.850 | 0.771 | 0.897 | 0.101
12 | 6 |  | 0.911 | 0.853 | 0.952 | 0.065 | 0.860 | 0.784 | 0.909 | 0.094
13 | 6 |  | 0.894 | 0.833 | 0.930 | 0.079 | 0.845 | 0.765 | 0.889 | 0.106
14 | 7 |  | 0.930 | 0.883 | 0.965 | 0.053 | 0.872 | 0.799 | 0.921 | 0.086
15 | 7 |  | 0.928 | 0.880 | 0.966 | 0.053 | 0.872 | 0.800 | 0.921 | 0.086
16 | 7 |  | 0.915 | 0.864 | 0.947 | 0.065 | 0.859 | 0.784 | 0.904 | 0.096
17 | 7 | i008 | 0.934 | 0.887 | 0.972 | 0.049 | 0.877 | 0.806 | 0.927 | 0.082
18 | 8 |  | 0.950 | 0.913 | 0.979 | 0.038 | 0.884 | 0.816 | 0.934 | 0.078
19 | 8 | i009 | 0.952 | 0.915 | 0.984 | 0.036 | 0.888 | 0.821 | 0.938 | 0.075
20 | 9 | i010 | 0.972 | 0.950 | 0.992 | 0.022 | 0.896 | 0.831 | 0.945 | 0.070
21 | 10 | i011 | 1 | 1 | 1 | 0 | 0.904 | 0.842 | 0.952 | 0.064
Table 6. Optimal comparison structures and the average similarity measures belonging to them when the distance is related to the priority vector from the complete comparison data and to the initial priority vector for n = 6 objects and uniformly distributed random perturbation values in  [ − 0.1 , 0.1 ] .
Left block: incomplete  π ^ ̲ inc  versus complete  π ^ ̲ co ; right block: incomplete  π ^ ̲ inc  versus initial  π ̲ . The Graph column refers to the structure drawings (images i012–i022) of the original article.
ID | Edges | Graph | ρ | τ | PE | EU | ρ | τ | PE | EU
1 | 5 | i012 | 0.837 | 0.743 | 0.860 | 0.123 | 0.805 | 0.703 | 0.828 | 0.139
19 | 6 | i013 | 0.874 | 0.789 | 0.910 | 0.092 | 0.839 | 0.742 | 0.876 | 0.110
30 | 7 | i014 | 0.900 | 0.825 | 0.937 | 0.075 | 0.861 | 0.771 | 0.901 | 0.096
54 | 8 | i015 | 0.917 | 0.849 | 0.953 | 0.063 | 0.875 | 0.790 | 0.917 | 0.086
73 | 9 | i016 | 0.934 | 0.876 | 0.969 | 0.051 | 0.889 | 0.809 | 0.932 | 0.077
85 | 10 | i017 | 0.943 | 0.890 | 0.975 | 0.045 | 0.895 | 0.817 | 0.937 | 0.073
103 | 11 | i018 | 0.952 | 0.905 | 0.981 | 0.038 | 0.901 | 0.825 | 0.943 | 0.069
108 | 12 | i019 | 0.962 | 0.923 | 0.988 | 0.031 | 0.908 | 0.835 | 0.950 | 0.065
110 | 13 | i020 | 0.971 | 0.941 | 0.992 | 0.024 | 0.912 | 0.841 | 0.953 | 0.062
111 | 14 | i021 | 0.984 | 0.966 | 0.996 | 0.015 | 0.916 | 0.847 | 0.957 | 0.060
112 | 15 | i022 | 1 | 1 | 1 | 0 | 0.920 | 0.853 | 0.961 | 0.057
Table 7. Average similarity measures when the distance is related to the priority vector from the complete comparison data and to the initial priority vector for n = 5 objects and Gaussian-distributed random perturbation values with expectation 0 and dispersion 1/30.
Left block: incomplete  π ^ ̲ inc  versus complete  π ^ ̲ co ; right block: incomplete  π ^ ̲ inc  versus initial  π ̲ . The Graph column refers to the structure drawings (images i023–i029) of the original article.
ID | Edges | Graph | ρ | τ | PE | EU | ρ | τ | PE | EU
1 | 4 | i023 | 0.932 | 0.885 | 0.968 | 0.052 | 0.922 | 0.869 | 0.961 | 0.058
2 | 4 |  | 0.924 | 0.874 | 0.959 | 0.060 | 0.913 | 0.858 | 0.951 | 0.066
3 | 4 |  | 0.917 | 0.865 | 0.951 | 0.066 | 0.906 | 0.849 | 0.942 | 0.073
4 | 5 |  | 0.944 | 0.903 | 0.976 | 0.043 | 0.931 | 0.883 | 0.969 | 0.051
5 | 5 |  | 0.948 | 0.909 | 0.980 | 0.040 | 0.934 | 0.887 | 0.972 | 0.049
6 | 5 |  | 0.941 | 0.899 | 0.974 | 0.046 | 0.928 | 0.879 | 0.966 | 0.054
7 | 5 |  | 0.936 | 0.892 | 0.968 | 0.051 | 0.923 | 0.872 | 0.960 | 0.059
8 | 5 | i024 | 0.953 | 0.916 | 0.985 | 0.036 | 0.938 | 0.892 | 0.977 | 0.046
9 | 6 |  | 0.957 | 0.923 | 0.985 | 0.034 | 0.941 | 0.898 | 0.977 | 0.043
10 | 6 | i025 | 0.965 | 0.937 | 0.992 | 0.026 | 0.948 | 0.909 | 0.984 | 0.037
11 | 6 |  | 0.956 | 0.923 | 0.986 | 0.033 | 0.941 | 0.898 | 0.978 | 0.043
12 | 6 |  | 0.962 | 0.932 | 0.990 | 0.029 | 0.945 | 0.904 | 0.982 | 0.040
13 | 6 |  | 0.954 | 0.920 | 0.983 | 0.036 | 0.938 | 0.894 | 0.975 | 0.046
14 | 7 |  | 0.971 | 0.947 | 0.993 | 0.022 | 0.952 | 0.914 | 0.986 | 0.035
15 | 7 |  | 0.970 | 0.945 | 0.993 | 0.023 | 0.951 | 0.913 | 0.985 | 0.036
16 | 7 |  | 0.964 | 0.936 | 0.987 | 0.029 | 0.945 | 0.905 | 0.980 | 0.041
17 | 7 | i026 | 0.972 | 0.948 | 0.994 | 0.022 | 0.952 | 0.915 | 0.986 | 0.035
18 | 8 |  | 0.979 | 0.960 | 0.996 | 0.016 | 0.956 | 0.921 | 0.988 | 0.032
19 | 8 | i027 | 0.980 | 0.962 | 0.997 | 0.015 | 0.958 | 0.923 | 0.989 | 0.031
20 | 9 | i028 | 0.988 | 0.978 | 0.999 | 0.009 | 0.961 | 0.929 | 0.991 | 0.029
21 | 10 | i029 | 1 | 1 | 1 | 0 | 0.964 | 0.933 | 0.992 | 0.027
Table 8. Average similarity measures when the distance is related to the priority vector from the complete comparison data and to the initial priority vector for n = 5 objects and Gaussian-distributed random perturbation values with expectation 0.01 and dispersion 1/30.
Left block: incomplete  π ^ ̲ inc  versus complete  π ^ ̲ co ; right block: incomplete  π ^ ̲ inc  versus initial  π ̲ . The Graph column refers to the structure drawings (images i030–i036) of the original article.
ID | Edges | Graph | ρ | τ | PE | EU | ρ | τ | PE | EU
1 | 4 | i030 | 0.932 | 0.885 | 0.969 | 0.050 | 0.922 | 0.870 | 0.962 | 0.056
2 | 4 |  | 0.925 | 0.875 | 0.961 | 0.057 | 0.914 | 0.859 | 0.953 | 0.064
3 | 4 |  | 0.917 | 0.865 | 0.953 | 0.064 | 0.906 | 0.849 | 0.944 | 0.071
4 | 5 |  | 0.944 | 0.903 | 0.977 | 0.041 | 0.931 | 0.883 | 0.970 | 0.050
5 | 5 |  | 0.948 | 0.910 | 0.981 | 0.038 | 0.935 | 0.888 | 0.973 | 0.047
6 | 5 |  | 0.942 | 0.900 | 0.975 | 0.044 | 0.929 | 0.880 | 0.967 | 0.052
7 | 5 |  | 0.936 | 0.892 | 0.969 | 0.049 | 0.923 | 0.872 | 0.961 | 0.057
8 | 5 | i031 | 0.953 | 0.916 | 0.986 | 0.035 | 0.938 | 0.893 | 0.977 | 0.045
9 | 6 |  | 0.957 | 0.924 | 0.985 | 0.032 | 0.942 | 0.899 | 0.978 | 0.042
10 | 6 | i032 | 0.966 | 0.937 | 0.992 | 0.025 | 0.949 | 0.909 | 0.984 | 0.037
11 | 6 |  | 0.956 | 0.923 | 0.986 | 0.032 | 0.941 | 0.899 | 0.978 | 0.042
12 | 6 |  | 0.962 | 0.932 | 0.990 | 0.028 | 0.946 | 0.904 | 0.982 | 0.039
13 | 6 |  | 0.954 | 0.920 | 0.983 | 0.034 | 0.939 | 0.895 | 0.975 | 0.045
14 | 7 |  | 0.971 | 0.947 | 0.993 | 0.022 | 0.952 | 0.915 | 0.986 | 0.034
15 | 7 |  | 0.970 | 0.945 | 0.993 | 0.022 | 0.951 | 0.913 | 0.985 | 0.035
16 | 7 |  | 0.964 | 0.936 | 0.988 | 0.028 | 0.946 | 0.906 | 0.980 | 0.040
17 | 7 | i033 | 0.972 | 0.949 | 0.994 | 0.021 | 0.953 | 0.915 | 0.986 | 0.035
18 | 8 |  | 0.979 | 0.960 | 0.996 | 0.016 | 0.957 | 0.922 | 0.988 | 0.031
19 | 8 | i034 | 0.980 | 0.962 | 0.997 | 0.015 | 0.958 | 0.924 | 0.989 | 0.031
20 | 9 | i035 | 0.988 | 0.978 | 0.999 | 0.009 | 0.961 | 0.929 | 0.991 | 0.028
21 | 10 | i036 | 1 | 1 | 1 | 0 | 0.964 | 0.934 | 0.992 | 0.027
