#### *3.2. Methodology*

#### 3.2.1. Fuzzy Set Theory

Fuzzy set theory was proposed by Zadeh to handle problems arising in uncertain environments. A fuzzy set is characterized by a membership function that assigns each element a degree of membership in the set. A tilde (~) is placed above a symbol to indicate a fuzzy number. If $\widetilde{A}$ is a triangular fuzzy number (TFN) with lower bound $l$, modal value $m$, and upper bound $u$, each value of its membership function lies in [0, 1] and is defined as shown in Equation (1):

$$\mu_{\widetilde{A}}(x) = \begin{cases} \dfrac{x - l}{m - l} & l \le x \le m \\ \dfrac{u - x}{u - m} & m \le x \le u \\ 0 & \text{otherwise} \end{cases} \tag{1}$$

Each membership level $y$ determines a left- and right-side representation of a TFN, as shown here:

$$
\widetilde{N} = \left(N^{l(y)}, N^{r(y)}\right) = \left(l + (m - l)y,\; u + (m - u)y\right), \quad y \in [0, 1].
$$

A TFN is shown in Figure 2.
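As a quick numerical illustration of Equation (1) and the left/right representation, the sketch below is a minimal Python rendering; the function names and the sample TFN (2, 3, 5) are invented for illustration, and strict $l < m < u$ is assumed.

```python
def tfn_membership(x, l, m, u):
    """Membership degree of x in the TFN (l, m, u), per Equation (1); assumes l < m < u."""
    if l <= x <= m:
        return (x - l) / (m - l)
    if m < x <= u:
        return (u - x) / (u - m)
    return 0.0


def tfn_lr(y, l, m, u):
    """Left/right representation of the TFN at membership level y in [0, 1]."""
    return (l + (m - l) * y, u + (m - u) * y)


# Example TFN "about 3" = (2, 3, 5)
print(tfn_membership(3, 2, 3, 5))  # peak of the triangle: 1.0
print(tfn_membership(4, 2, 3, 5))  # on the right slope: 0.5
print(tfn_lr(0.0, 2, 3, 5))        # at y = 0 the two sides give the support: (2, 5)
print(tfn_lr(1.0, 2, 3, 5))        # at y = 1 both sides meet at the core: (3, 3)
```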

#### 3.2.2. Fuzzy Analytic Network Process

Unlike AHP, ANP does not require a strict hierarchical structure: it allows elements to control, and be controlled by, different levels or clusters of attributes, and several control elements may be present at the same level. Interdependence between factors and levels is treated systematically through feedback, i.e., interactions between elements.

During the ANP process, the elements are compared pairwise using an expert rating scale, from which the weighting matrix is established. The weights are then adjusted by computing the limit of powers of the supermatrix.
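The supermatrix step can be illustrated numerically: repeatedly multiplying a column-stochastic weighted supermatrix by itself drives its columns toward the limit priorities. A minimal sketch (the matrix values are invented for illustration):

```python
import numpy as np

# Hypothetical weighted supermatrix: column j holds the local weights of all
# elements with respect to element j, so each column sums to 1.
W = np.array([
    [0.2, 0.5, 0.3],
    [0.3, 0.2, 0.4],
    [0.5, 0.3, 0.3],
])

# Limit supermatrix: raise W to a high power until the columns stabilize.
limit = np.linalg.matrix_power(W, 50)
priorities = limit[:, 0]  # every column is (near-)identical at the limit
print(priorities)
```

At the limit, every column carries the same global priority vector, which is why the adjusted weights can be read from any single column.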

The AHP method provides a structured framework for setting priorities at each level of the hierarchy using pairwise comparisons quantified on a 1–9 priority scale. In contrast, the ANP approach allows more complex relationships between the elements and their ranks. The 1–9 scale for AHP is shown in Table 2.


**Table 2.** The 1–9 scale for AHP [6].

The disadvantage of ANP in dealing with imprecision and subjectivity in the pairwise comparison process is addressed by the fuzzy analytic network process (FANP). The FANP applies a range of values to incorporate the decision-makers' uncertainty [38], whereas the ANP model uses crisp values. Following the Saaty [72] fuzzy prioritization approach, each comparison is expressed as a triangular fuzzy number $O_{ab} = (O^x_{ab}, O^o_{ab}, O^v_{ab})$ with core $O^o_{ab}$ and support $[O^x_{ab}, O^v_{ab}]$, as shown in Figure 3.

The 1–9 fuzzy conversion scale is shown in Table 3:


**Table 3.** The 1–9 fuzzy conversion scale [72].

The reversed preference (non-preference) to $O_{ab}$ is likewise expressed by a triangular fuzzy number, $(1/O^v_{ab},\, 1/O^o_{ab},\, 1/O^x_{ab})$. The derivation of the criteria weights from the fuzzy Saaty matrix can then be divided into four steps [73]:

1. Calculation of fuzzy synthetic extents. The rows of the fuzzy comparison matrix are aggregated into TFNs, called fuzzy synthetic extents $K_a = (k^x_a, k^o_a, k^v_a)$, using Equations (2)–(4) [74]:

$$K_a = \sum_{b=1}^{n} O_{ab} \otimes \left(\sum_{a=1}^{n} \sum_{b=1}^{n} O_{ab}\right)^{-1} \tag{2}$$

$$\sum_{b=1}^{n} O_{ab} = \left(\sum_{b=1}^{n} O^x_{ab},\; \sum_{b=1}^{n} O^o_{ab},\; \sum_{b=1}^{n} O^v_{ab}\right) \tag{3}$$

$$O_{ab}^{-1} = \left(1/O^v_{ab},\; 1/O^o_{ab},\; 1/O^x_{ab}\right) \tag{4}$$

$$O \otimes N = \left(O_x \cdot N_x,\; O_o \cdot N_o,\; O_v \cdot N_v\right) \tag{5}$$

for $a, b = 1, 2, \dots, n$, where $O = (O_x, O_o, O_v)$ and $N = (N_x, N_o, N_v)$ are triangular fuzzy numbers.

2. Comparison of the fuzzy synthetic extents. The fuzzy synthetic extents are compared through the possibility degree of the fuzzy-valued relation $\ge$ given by Equation (6), and the weights $Q_a$ are calculated (for more detail, see [75]):

$$Q_a = \min_{b} \left\{ \frac{k^x_b - k^v_a}{\left(k^o_a - k^v_a\right) - \left(k^o_b - k^x_b\right)} \right\} \tag{6}$$

for $a, b = 1, 2, \dots, n$.

3. Standardization of the weights. So that the weights within one matrix sum to 1, the final weights $w_a$ are obtained using Equation (7):

$$w_a = Q_a \Big/ \sum_{b=1}^{n} Q_b \tag{7}$$

for $a = 1, 2, \dots, n$.

4. Assessment of the consistency of the Saaty matrix. In line with [74], the consistency of the matrix is sufficient if the inequality in Equation (8) holds:

$$RT = \frac{CT}{RR} = \frac{\overline{\lambda} - n}{(n - 1) \cdot RR} \le 0.1 \tag{8}$$

where $\overline{\lambda}$ is the arithmetic mean of the maximum real eigenvalues of the matrices $\left(O^{\xi}_{ab}\right)_{1 \le a,b \le n}$, $\xi \in \{x, o, v\}$; $n$ is the size of the Saaty matrix; and $RR$ represents a random index whose value depends on $n$ [74].
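The four steps above can be sketched end to end for a small fuzzy comparison matrix. The 2 × 2 matrix below is an invented example; the possibility-degree function also covers the boundary cases ($k^o_a \ge k^o_b$ gives degree 1, $k^x_b \ge k^v_a$ gives degree 0) used in the standard extent-analysis formulation, which Equation (6) shows only in its ratio form.

```python
# Extent analysis over TFNs stored as tuples (x, o, v) = (low, mid, up), Eqs. (2)-(7).
def extent_weights(M):
    n = len(M)
    # Eq. (3): component-wise row sums of the fuzzy comparison matrix
    rows = [tuple(sum(M[a][b][k] for b in range(n)) for k in range(3)) for a in range(n)]
    total = tuple(sum(r[k] for r in rows) for k in range(3))
    # Eqs. (2), (4), (5): K_a = rowsum_a (x) total^-1 (inversion reverses the bounds)
    K = [(r[0] / total[2], r[1] / total[1], r[2] / total[0]) for r in rows]

    def possibility(a, b):  # V(K_a >= K_b), Eq. (6) plus its boundary cases
        if K[a][1] >= K[b][1]:
            return 1.0
        if K[b][0] >= K[a][2]:
            return 0.0
        return (K[b][0] - K[a][2]) / ((K[a][1] - K[a][2]) - (K[b][1] - K[b][0]))

    Q = [min(possibility(a, b) for b in range(n) if b != a) for a in range(n)]
    s = sum(Q)
    return [q / s for q in Q]  # Eq. (7): normalization to sum 1


# Invented 2x2 fuzzy Saaty matrix: criterion 1 is fuzzily "about twice" as important
M = [[(1, 1, 1), (1, 2, 3)],
     [(1 / 3, 1 / 2, 1), (1, 1, 1)]]
w = extent_weights(M)
print(w)  # -> [9/13, 4/13], i.e. approximately [0.6923, 0.3077]
```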

#### *3.3. Data Envelopment Analysis*

#### 3.3.1. Charnes–Cooper–Rhodes Model (CCR Model)

Charnes, Cooper, and Rhodes (1978) [30] proposed a basic DEA model, called the CCR model:

$$\begin{array}{c} \max\limits_{g,f} \gamma = \dfrac{f^T y_0}{g^T x_0} \\ \text{S.t.} \\ f^T y_b - g^T x_b \le 0, \; b = 1, 2, \dots, n \\ f \ge 0 \\ g \ge 0 \end{array} \tag{9}$$

Due to the constraints, the optimal value $\gamma^*$ is at most 1.
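For intuition about Equation (9): with a single input and a single output, the optimal ratio weights make each DMU's efficiency its output/input ratio relative to the best ratio in the sample. A minimal sketch on invented data:

```python
# Single-input, single-output CCR: gamma* = (y0 / x0) / max_j (y_j / x_j).
inputs = [2.0, 4.0, 8.0]    # x_j for three DMUs (invented data)
outputs = [2.0, 2.0, 4.0]   # y_j

best_ratio = max(y / x for x, y in zip(inputs, outputs))
efficiency = [(y / x) / best_ratio for x, y in zip(inputs, outputs)]
print(efficiency)  # -> [1.0, 0.5, 0.5]
```

Only the first DMU attains the best ratio and is therefore CCR-efficient; the other two could, proportionally, produce their output with half the input.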

DMU$_0$ is efficient if $\gamma^* = 1$ and there exists at least one optimal solution with $f^* > 0$ and $g^* > 0$. The fractional program can be converted into the following linear program [76]:

$$\begin{array}{c} \max\limits_{g,f} \gamma = f^T y_0 \\ \text{S.t.} \\ g^T x_0 = 1 \\ f^T y_j - g^T x_j \le 0, \; j = 1, 2, \dots, n \\ g \ge 0 \\ f \ge 0 \end{array} \tag{10}$$

The Farrell [77] envelopment model corresponding to Equation (10), with variable $\gamma$ and a nonnegative vector $\beta = (\beta_1, \beta_2, \beta_3, \dots, \beta_n)$, is expressed as [76]:

$$\begin{aligned} \min \; & \gamma \\ \text{S.t.} \; & \sum_{j=1}^{n} x_{ij} \beta_j + d_i^- = \gamma x_{i0}, \; i = 1, 2, \dots, p \\ & \sum_{j=1}^{n} y_{rj} \beta_j - d_r^+ = y_{r0}, \; r = 1, 2, \dots, q \\ & \beta_j \ge 0, \; j = 1, 2, \dots, n \\ & d_i^- \ge 0, \; i = 1, 2, \dots, p \\ & d_r^+ \ge 0, \; r = 1, 2, \dots, q \end{aligned} \tag{11}$$

Equation (11) always has the feasible solution $\gamma = 1$, $\beta_0 = 1$, $\beta_j = 0$ $(j \neq 0)$, so the optimal value satisfies $\gamma^* \le 1$. The process is repeated for each DMU$_j$, $j = 1, 2, \dots, n$. A DMU is inefficient when $\gamma^* < 1$, while it is a boundary point when $\gamma^* = 1$. We avoid the weakly efficient frontier points by invoking a second-phase linear program as follows [76]:

$$\begin{aligned} \max \; & \sum_{i=1}^{p} d_i^- + \sum_{r=1}^{q} d_r^+ \\ \text{S.t.} \; & \sum_{j=1}^{n} x_{ij} \beta_j + d_i^- = \gamma^* x_{i0}, \; i = 1, 2, \dots, p \\ & \sum_{j=1}^{n} y_{rj} \beta_j - d_r^+ = y_{r0}, \; r = 1, 2, \dots, q \\ & \beta_j \ge 0, \; j = 1, 2, \dots, n \\ & d_i^- \ge 0, \; i = 1, 2, \dots, p \\ & d_r^+ \ge 0, \; r = 1, 2, \dots, q \end{aligned} \tag{12}$$

In this case, note that the choices of $d_i^-$ and $d_r^+$ do not affect the optimal $\gamma^*$. The performance of DMU$_0$ is 100% efficient if, and only if, both (1) $\gamma^* = 1$ and (2) $d_i^{-*} = d_r^{+*} = 0$ for all $i$ and $r$. The performance of DMU$_0$ is weakly efficient if, and only if, both (1) $\gamma^* = 1$ and (2) $d_i^{-*} \neq 0$ or $d_r^{+*} \neq 0$ for some $i$ or $r$ in some optimal alternative. Thus, the preceding two-phase development amounts to solving the following single problem [76]:

$$\begin{aligned} \min \; & \gamma - \varepsilon \left( \sum_{i=1}^{p} d_i^- + \sum_{r=1}^{q} d_r^+ \right) \\ \text{S.t.} \; & \sum_{j=1}^{n} x_{ij} \beta_j + d_i^- = \gamma x_{i0}, \; i = 1, 2, \dots, p \\ & \sum_{j=1}^{n} y_{rj} \beta_j - d_r^+ = y_{r0}, \; r = 1, 2, \dots, q \\ & \beta_j \ge 0, \; j = 1, 2, \dots, n \\ & d_i^- \ge 0, \; i = 1, 2, \dots, p \\ & d_r^+ \ge 0, \; r = 1, 2, \dots, q \end{aligned} \tag{13}$$

In this case, the $d_i^-$ and $d_r^+$ variables convert the inequalities into equivalent equations, so solving Equation (13) is similar to solving Equation (11) in two stages: first minimizing $\gamma$, and then fixing $\gamma = \gamma^*$ as in Equation (12). Restating the ratio form of Equation (9) with non-Archimedean lower bounds on the weights yields [76]:

$$\begin{array}{c} \max\limits_{g,f} \gamma = \dfrac{f^T y_0}{g^T x_0} \\ \text{S.t.} \\ \dfrac{f^T y_j}{g^T x_j} \le 1, \; j = 1, 2, \dots, n \\ g \ge \varepsilon > 0 \\ f \ge \varepsilon > 0 \end{array} \tag{14}$$

If $\varepsilon > 0$ is defined as a non-Archimedean element, the corresponding output-oriented multiplier and envelopment models, analogous to Equations (10) and (13), are as follows [76]:


$$\begin{array}{c} \min\limits_{g,f} \gamma = g^T x_0 \\ \text{S.t.} \\ f^T y_0 = 1 \\ g^T x_j - f^T y_j \ge 0, \; j = 1, 2, \dots, n \\ g \ge \varepsilon > 0 \\ f \ge \varepsilon > 0 \end{array} \tag{15}$$

and:

$$\begin{aligned} \max \; & \phi - \varepsilon \left( \sum_{i=1}^{p} d_i^- + \sum_{r=1}^{q} d_r^+ \right) \\ \text{S.t.} \; & \sum_{j=1}^{n} x_{ij} \beta_j + d_i^- = x_{i0}, \; i = 1, 2, \dots, p \\ & \sum_{j=1}^{n} y_{rj} \beta_j - d_r^+ = \phi y_{r0}, \; r = 1, 2, \dots, q \\ & \beta_j \ge 0, \; j = 1, 2, \dots, n \\ & d_i^- \ge 0, \; i = 1, 2, \dots, p \\ & d_r^+ \ge 0, \; r = 1, 2, \dots, q \end{aligned} \tag{16}$$


The input-oriented CCR (CCR-I) has the dual multiplier model, expressed as [76]:

$$\begin{aligned} \max \; & z = \sum_{r=1}^{q} f_r y_{r0} \\ \text{S.t.} \; & \sum_{r=1}^{q} f_r y_{rj} - \sum_{i=1}^{p} g_i x_{ij} \le 0, \; j = 1, 2, \dots, n \\ & \sum_{i=1}^{p} g_i x_{i0} = 1 \\ & f_r, g_i \ge \varepsilon > 0 \end{aligned} \tag{17}$$

The output-oriented CCR (CCR-O) has the dual multiplier model, expressed as [76]:

$$\begin{aligned} \min \; & q = \sum_{i=1}^{p} g_i x_{i0} \\ \text{S.t.} \; & \sum_{i=1}^{p} g_i x_{ij} - \sum_{r=1}^{q} f_r y_{rj} \ge 0, \; j = 1, 2, \dots, n \\ & \sum_{r=1}^{q} f_r y_{r0} = 1 \\ & f_r, g_i \ge \varepsilon > 0 \end{aligned} \tag{18}$$
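The input-oriented envelopment program of Equation (11) can be solved with any off-the-shelf LP solver. A minimal sketch using `scipy.optimize.linprog` on invented data; only phase 1, the computation of $\gamma^*$, is shown, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 8.0]])   # p x n input matrix (invented data)
Y = np.array([[2.0, 2.0, 4.0]])   # q x n output matrix


def ccr_input_efficiency(X, Y, j0):
    """Phase 1 of Eq. (11): min gamma s.t. X b <= gamma x0, Y b >= y0, b >= 0."""
    p, n = X.shape
    q = Y.shape[0]
    # Decision variables: [gamma, beta_1 .. beta_n]; objective picks out gamma.
    c = np.r_[1.0, np.zeros(n)]
    # Inequalities: X beta - gamma x0 <= 0  and  -Y beta <= -y0.
    A_ub = np.vstack([np.c_[-X[:, j0], X], np.c_[np.zeros(q), -Y]])
    b_ub = np.r_[np.zeros(p), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]


print([round(ccr_input_efficiency(X, Y, j), 4) for j in range(3)])  # -> [1.0, 0.5, 0.5]
```

With a single input and output this reproduces the best-ratio result, but the same code handles any number of inputs and outputs.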

#### 3.3.2. Banker–Charnes–Cooper Model (BCC Model)

Banker et al. proposed the input-oriented BCC model (BCC-I) [30], which assesses the efficiency of DMU$_0$ by solving the following linear program [76]:

$$\begin{aligned} \gamma\_B &= \min \gamma\\ \text{S.t.} \\ \sum\_{j=1}^n x\_{ij}\beta\_j + d\_i^- &= \gamma x\_{i0}, \; i = 1, \; 2, \dots, p\\ \sum\_{j=1}^n y\_{rj}\beta\_j - d\_r^+ &= y\_{r0}, \; r = 1, \; 2, \dots, q\\ \sum\_{k=1}^n \beta\_k &= 1\\ \beta\_k &\ge 0, k = 1, 2, \dots, n \end{aligned} \tag{19}$$

We avoid the weakly efficient frontier point by invoking the linear program as follows [76]:

$$\begin{aligned} \max \; & \sum_{i=1}^{p} d_i^- + \sum_{r=1}^{q} d_r^+ \\ \text{S.t.} \; & \sum_{j=1}^{n} x_{ij} \beta_j + d_i^- = \gamma_B^* x_{i0}, \; i = 1, 2, \dots, p \\ & \sum_{j=1}^{n} y_{rj} \beta_j - d_r^+ = y_{r0}, \; r = 1, 2, \dots, q \\ & \sum_{k=1}^{n} \beta_k = 1 \\ & \beta_k \ge 0, \; k = 1, 2, \dots, n \\ & d_i^- \ge 0, \; i = 1, 2, \dots, p \\ & d_r^+ \ge 0, \; r = 1, 2, \dots, q \end{aligned} \tag{20}$$

The two phases can be combined into a single linear program as follows [76]:

$$\begin{aligned} \min \; & \gamma - \varepsilon \left( \sum_{i=1}^{p} d_i^- + \sum_{r=1}^{q} d_r^+ \right) \\ \text{S.t.} \; & \sum_{j=1}^{n} x_{ij} \beta_j + d_i^- = \gamma x_{i0}, \; i = 1, 2, \dots, p \\ & \sum_{j=1}^{n} y_{rj} \beta_j - d_r^+ = y_{r0}, \; r = 1, 2, \dots, q \\ & \sum_{k=1}^{n} \beta_k = 1 \\ & \beta_k \ge 0, \; k = 1, 2, \dots, n \\ & d_i^- \ge 0, \; i = 1, 2, \dots, p \\ & d_r^+ \ge 0, \; r = 1, 2, \dots, q \end{aligned} \tag{21}$$

Dualizing the envelopment program in Equation (19) gives the multiplier form, which is expressed as [76]:

$$\begin{array}{c} \max\limits_{g, f, f_0} \gamma_B = f^T y_0 - f_0 \\ \text{S.t.} \\ g^T x_0 = 1 \\ f^T y_j - g^T x_j - f_0 \le 0, \; j = 1, 2, \dots, n \\ g \ge 0 \\ f \ge 0 \end{array} \tag{22}$$

While $g$ and $f$ in Equation (22) are vectors, the scalar $f_0$ may be positive, negative, or zero. Thus, the equivalent BCC fractional program is obtained from the dual program in Equation (22) as [76]:

$$\begin{array}{c} \max\limits_{g, f} \gamma = \dfrac{f^T y_0 - f_0}{g^T x_0} \\ \text{S.t.} \\ \dfrac{f^T y_j - f_0}{g^T x_j} \le 1, \; j = 1, 2, \dots, n \\ g \ge 0 \\ f \ge 0 \end{array} \tag{23}$$

DMU$_0$ is called BCC-efficient if an optimal solution $(\gamma_B^*, d^{-*}, d^{+*})$ of the two-phase process for Equation (19) satisfies $\gamma_B^* = 1$ and has no slack, $d^{-*} = d^{+*} = 0$. The improved activity $(\gamma^* x - d^{-*}, y + d^{+*})$ is then also BCC-efficient [76].
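Computationally, the only change from the CCR envelopment program is the convexity constraint $\sum_k \beta_k = 1$, which switches the technology from constant to variable returns to scale. A sketch contrasting the two on invented data, again assuming `scipy.optimize.linprog` is available:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 8.0]])   # 1 input, 3 DMUs (invented data)
Y = np.array([[2.0, 3.0, 4.0]])   # 1 output


def input_efficiency(X, Y, j0, vrs):
    """min gamma s.t. X b <= gamma x0, Y b >= y0, (sum b = 1 if vrs), b >= 0."""
    p, n = X.shape
    q = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # variables: [gamma, beta]
    A_ub = np.vstack([np.c_[-X[:, j0], X], np.c_[np.zeros(q), -Y]])
    b_ub = np.r_[np.zeros(p), -Y[:, j0]]
    A_eq = np.r_[[0.0], np.ones(n)].reshape(1, -1) if vrs else None  # convexity row
    b_eq = np.array([1.0]) if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]


ccr = [input_efficiency(X, Y, j, vrs=False) for j in range(3)]
bcc = [input_efficiency(X, Y, j, vrs=True) for j in range(3)]
print(ccr)  # CRS: only the best-ratio DMU is efficient -> [1.0, 0.75, 0.5]
print(bcc)  # VRS: all three lie on the convex frontier -> [1.0, 1.0, 1.0]
```

On this data the BCC model rates every DMU efficient because each lies on the convex hull of observed activities, while the CCR model penalizes the larger DMUs for their lower output/input ratios.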

The output-oriented BCC model (BCC-O) is [76]:

$$\begin{aligned} \max \; & \eta \\ \text{S.t.} \; & \sum_{j=1}^{n} x_{ij} \beta_j + d_i^- = x_{i0}, \; i = 1, 2, \dots, p \\ & \sum_{j=1}^{n} y_{rj} \beta_j - d_r^+ = \eta y_{r0}, \; r = 1, 2, \dots, q \\ & \sum_{k=1}^{n} \beta_k = 1 \\ & \beta_k \ge 0, \; k = 1, 2, \dots, n \end{aligned} \tag{24}$$

From Equation (24), we obtain the associated multiplier form, expressed as [76]:

$$\begin{array}{c} \min\limits_{g, f, f_0} g^T x_0 - f_0 \\ \text{S.t.} \\ f^T y_0 = 1 \\ g^T x_j - f^T y_j - f_0 \ge 0, \; j = 1, 2, \dots, n \\ g \ge 0 \\ f \ge 0 \end{array} \tag{25}$$

$f_0$ is the scalar associated with the convexity constraint $\sum_{k=1}^{n} \beta_k = 1$. In conclusion, the equivalent BCC fractional programming formulation for Equation (25) is obtained as [76]:

$$\begin{array}{c} \min\limits_{g, f, f_0} \dfrac{g^T x_0 - f_0}{f^T y_0} \\ \text{S.t.} \\ \dfrac{g^T x_j - f_0}{f^T y_j} \ge 1, \; j = 1, 2, \dots, n \\ g \ge 0 \\ f \ge 0 \end{array} \tag{26}$$

#### 3.3.3. Slacks-Based Measure Model (SBM Model)

The SBM model was introduced by Tone [78] (see also Pastor et al. [79]).
