*3.1. Stability Check Algorithm for the AHP Method*

The quantitative evaluation of stability, and the criterion used to assess it, depend on the specific problem being solved. Thus, to evaluate the stability of a method for an MCDM model, one can use the percentage of cases in which the best alternative loses its leading position, the maximum inconsistency of the method's evaluations, the percentage of variation in the ranking order of the alternatives, etc. [20].

The stability *δ* is evaluated in this paper as the maximum relative error of the criteria weights:

$$\delta = \max_{1 \le j \le m,\; 1 \le \xi \le T} \frac{\left| \omega_j^{(\xi)} - \omega_j \right|}{\omega_j},\tag{9}$$

where $\omega_j$ is the weight of the *j*-th criterion, $\omega_j^{(\xi)}$ is the weight of the *j*-th criterion obtained as the result of the simulation, $\xi$ is the simulation number, $1 \le \xi \le T$, and $T$ is the number of simulations.
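Eq. (9) amounts to a double maximum over criteria and simulations. A minimal sketch, with a hypothetical function name and illustrative weight values:

```python
def stability(weights, simulated):
    """Maximum relative error of the criteria weights, Eq. (9).

    weights   -- reference weights (omega_j), length m
    simulated -- list of T simulated weight vectors (omega_j^(xi))
    """
    return max(
        abs(w_sim - w) / w
        for sim in simulated
        for w_sim, w in zip(sim, weights)
    )

# Illustrative (made-up) values: the largest relative deviation wins.
delta = stability([0.5, 0.3, 0.2], [[0.48, 0.31, 0.21], [0.52, 0.29, 0.19]])
```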

The stability of the AHP method itself is understood as follows.

The Saaty scale for the AHP method uses only integer evaluations $p_{ij}$ from 1 to 9, showing how many times more important one criterion (the *i*-th) is than another (the *j*-th). The evaluations symmetric with respect to the main diagonal are $1/p_{ij}$; these numbers, being less than 1, show how many times less important the second (*j*-th) criterion is than the first (*i*-th) one. To check the stability of the AHP method itself, we expand the evaluation scale and assume that any real number can act as an evaluation. This allows the evaluations provided by each expert to be varied randomly: using statistical simulation (the Monte Carlo method), the evaluations are simulated within certain intervals, a consistency check is performed each time, and the weight variation intervals are recorded. The Monte Carlo method makes it possible to reproduce a real situation on a computer many times over, which cannot be replicated in practice, or whose implementation would require significant resources and time.

Why did Saaty suggest an integer number scale and not expand the scale to the set of real numbers? In the latter case, it would be sufficient to compare the importance of one criterion only (for example, the most important one) in relation to all the other criteria, that is, to fill in one column (or row) only. It would then immediately be possible to fill in all the remaining rows (or columns) of the matrix, whose elements would be proportional to the elements of the one filled-in column.
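For instance, if the scale were extended to real numbers and the matrix were perfectly consistent, a single filled-in row would determine all the others, since $p_{ij} = p_{1j}/p_{1i}$. A minimal sketch with a hypothetical function name and illustrative values:

```python
def consistent_matrix(first_row):
    """Fill a perfectly consistent pairwise-comparison matrix from its
    first row: p[i][j] = first_row[j] / first_row[i]."""
    m = len(first_row)
    return [[first_row[j] / first_row[i] for j in range(m)] for i in range(m)]

P = consistent_matrix([1.0, 2.0, 4.0])
# Every row is proportional to the first one, e.g. P[1] == [0.5, 1.0, 2.0].
```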

Considering that the criteria weights are related to the eigenvector of the comparison matrix, it is important to determine how small variations of the matrix elements affect the values of the eigenvector elements and, correspondingly, the values of the criteria weights, that is, the normalized values of the eigenvector.

The stability check algorithm for the AHP method can be represented in the following manner.

Step 1. The pairwise comparison matrix $P^{(k)}$ of the criteria of one of the experts ($k = 1$) is selected. The consistency of the evaluations ($CR < 0.1$) is verified. The criteria weights $\Omega = (\omega_j)$, $j = 1, 2, \ldots, m$, are calculated.
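Step 1 might be sketched as follows; the function name and the numpy implementation are assumptions, with the weights taken as the normalized principal eigenvector and `RI` holding Saaty's published random consistency indices:

```python
import numpy as np

# Saaty's random-index values for matrix orders 1..9 (RI = 0 for m <= 2)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights_and_cr(P):
    """Criteria weights (normalized principal eigenvector) and the
    consistency ratio CR of a pairwise-comparison matrix P."""
    P = np.asarray(P, dtype=float)
    m = P.shape[0]
    eigvals, eigvecs = np.linalg.eig(P)
    k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalized weights
    ci = (eigvals[k].real - m) / (m - 1)     # consistency index
    cr = ci / RI[m]                          # consistency ratio (m >= 3)
    return w, cr
```

A matrix passes the check of Step 1 when the returned `cr` is below 0.1.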

Step 2. The percentage $q$ of inconsistency of the simulated evaluations $\hat{p}_{ij}^{(k)}$ with the expert evaluations $p_{ij}^{(k)}$ ($i \ne j$) is set ($q = 5\%$, $q = 10\%$). The simulated random values of the evaluations therefore vary within the interval $\hat{p}_{ij}^{(k)} \in \left[ p_{ij}^{(k)} - p_{ij}^{(k)} \frac{q}{100},\; p_{ij}^{(k)} + p_{ij}^{(k)} \frac{q}{100} \right]$. The elements of the main diagonal remain unchanged: $p_{ii}^{(k)} = 1$.
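The variation interval of Step 2 is a symmetric $\pm q\%$ band around each evaluation. A minimal sketch with a hypothetical helper name:

```python
def simulation_interval(p, q):
    """Interval within which a simulated evaluation may vary when the
    allowed inconsistency is q percent of the original value p (Step 2)."""
    delta = p * q / 100.0
    return (p - delta, p + delta)

# For p = 4 and q = 5 % the interval is approximately (3.8, 4.2).
lo, hi = simulation_interval(4, 5)
```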

We vary only the integer-valued evaluations $p_{ij}^{(k)} = 1, 2, \ldots, 9$ on both sides of the main diagonal. The elements symmetric with respect to the main diagonal are $p_{ji}^{(k)} = \frac{1}{p_{ij}^{(k)}}$.

Step 3. A sequence of random numbers $\xi_r$ ($r = 1$) uniformly distributed within the interval $[0, 1]$ is generated using the method of statistical simulation (Monte Carlo). The random evaluation $\hat{p}_{ij}^{(k)}$ of the *k*-th expert with the $q$-th inconsistency is calculated; it belongs to the interval $\left[ p_{ij}^{(k)} - p_{ij}^{(k)} \frac{q}{100},\; p_{ij}^{(k)} + p_{ij}^{(k)} \frac{q}{100} \right]$:

$$\hat{p}_{ij}^{(k)} = p_{ij}^{(k)} - p_{ij}^{(k)} \frac{q}{100} + 2\, p_{ij}^{(k)} \frac{q}{100}\, \xi_r \in \left[ p_{ij}^{(k)} - p_{ij}^{(k)} \frac{q}{100},\; p_{ij}^{(k)} + p_{ij}^{(k)} \frac{q}{100} \right].$$
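The linear mapping of a uniform random number $\xi_r \in [0, 1]$ onto the evaluation interval in Step 3 can be sketched as (hypothetical function name):

```python
def simulated_evaluation(p, q, xi):
    """Map a uniform random number xi in [0, 1] onto the interval
    [p - p*q/100, p + p*q/100] (Step 3)."""
    delta = p * q / 100.0
    return p - delta + 2.0 * delta * xi

# xi = 0, 0.5 and 1 hit the left end, the centre and the right end
# of the interval, respectively.
```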

A new random number from the sequence $\xi_r$ is used for each element $p_{ij}^{(k)}$ of the matrix.

Step 4. A random pairwise comparison matrix $P$ is formed from the simulated elements: $P = \left\| \hat{p}_{ij}^{(k)} \right\|$. The consistency of the evaluations ($CR < 0.1$) is verified. If the value of the consistency ratio for the evaluations is $CR \ge 0.1$, the matrix is discarded. The criteria weights $\Omega = (\omega_j^{(r)})$, $r = 1$, are calculated.
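Forming the random matrix of Step 4 while preserving the reciprocal symmetry of the scale might look like this (a hypothetical numpy sketch; the $CR$ check of Step 4 is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def random_comparison_matrix(P, q):
    """Simulate each above-diagonal element within +/- q percent of its
    original value (Step 3) and mirror the reciprocals below the
    diagonal; the diagonal stays equal to 1 (Step 2)."""
    P = np.asarray(P, dtype=float)
    m = P.shape[0]
    P_hat = np.ones((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            delta = P[i, j] * q / 100.0
            xi = rng.random()                    # uniform on [0, 1]
            P_hat[i, j] = P[i, j] - delta + 2.0 * delta * xi
            P_hat[j, i] = 1.0 / P_hat[i, j]      # reciprocal symmetry
    return P_hat
```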

Step 5. New sequences of random numbers $\xi_r$ ($r = 2, 3, \ldots, T$) are generated, where $T$ is the number of repetitions (simulations). Steps 3 and 4 are repeated. The criteria weights $(\omega_j^{(r)})$, $r = 2, 3, \ldots, T$, are calculated.

Step 6. The relative errors of the criteria weights of the AHP method are calculated for every *j*-th criterion and every simulation $\xi$: $\delta_j^{(\xi)} = \frac{\left| \omega_j^{(\xi)} - \omega_j \right|}{\omega_j}$.

Step 7. The largest value of the relative errors $\delta_j^{(\xi)}$ over all the criteria is calculated for every simulation $\xi$: $\delta^{(\xi)} = \max\limits_j \delta_j^{(\xi)}$.

Step 8. The largest value of the relative errors $\delta^{(\xi)}$ over all the simulations $\xi$ is taken as the AHP method error for the given comparison matrix: $\delta = \max\limits_\xi \delta^{(\xi)}$.
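Putting the eight steps together, a minimal end-to-end sketch (the function names, the numpy implementation, and the choice of the principal eigenvector for the weights are assumptions, not the authors' code; `RI` holds Saaty's published random indices):

```python
import numpy as np

# Saaty's random consistency indices for matrix orders 3..9
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def weights_and_cr(P):
    """Criteria weights (normalized principal eigenvector) and CR."""
    m = P.shape[0]
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)                      # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), (vals[k].real - m) / ((m - 1) * RI[m])

def ahp_stability(P, q=5.0, T=1000, seed=0):
    """Monte Carlo stability check (Steps 1-8): the largest relative
    error delta of the criteria weights over T simulations."""
    P = np.asarray(P, dtype=float)
    m = P.shape[0]
    w0, cr = weights_and_cr(P)                    # Step 1
    assert cr < 0.1, "the expert matrix itself must be consistent"
    rng = np.random.default_rng(seed)
    delta = 0.0
    for _ in range(T):                            # Steps 3 and 5
        P_hat = np.ones((m, m))
        for i in range(m):
            for j in range(i + 1, m):
                d = P[i, j] * q / 100.0           # Step 2: +/- q percent
                P_hat[i, j] = P[i, j] - d + 2.0 * d * rng.random()
                P_hat[j, i] = 1.0 / P_hat[i, j]   # reciprocal symmetry
        w, cr = weights_and_cr(P_hat)             # Step 4
        if cr >= 0.1:
            continue                              # discard inconsistent matrix
        delta = max(delta, float(np.max(np.abs(w - w0) / w0)))  # Steps 6-8
    return delta
```

For a consistent expert matrix and a small $q$, the returned $\delta$ stays small, which is the stability the algorithm is designed to quantify.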
