#### **2. General Concepts**

In this section, general concepts related to linguistic variables, HFSs, HFPRs and HFLSs are recalled.

#### *2.1. Linguistic Variables*

Assume $lv_i$ stands for a possible linguistic value in a finite and totally ordered discrete term set $LV = \{lv_i \mid i = -t, \ldots, -1, 0, 1, \ldots, t\}$ [33,34]. It is usually required to meet the following conditions:


When the aggregated information is used in the process of decision making, it usually does not match the values in the predefined evaluation scale. To preserve all the obtained values, Xu [33] extended the preceding term set $LV$ into a continuous one $LV = \{lv_i \mid i \in [-p, p]\}$, where $p$ ($p > t$) is a sufficiently large positive integer.

Taking two linguistic terms *lvi*, *lvj* ∈ *LV* into account, some operations are proposed in the following:


#### *2.2. Hesitant Fuzzy Sets*

Since Zadeh [35] proposed fuzzy sets, they have been widely applied in various fields [36–40], and many extensions of fuzzy sets have been developed [41,42]. HFSs, as extensions of fuzzy sets, were first presented by Torra [32]. They are designed to cope with situations where several numerical values are permitted to indicate an element's membership degree [43–45]. The definition of HFSs is given as follows.

**Definition 1** [32]**.** *If X is a fixed set, then a hesitant fuzzy set (HFS) on X is defined in terms of a function that, when applied to X, returns a subset of [0, 1]. It is described by the following mathematical symbol:*

$$F = \{< x, h_F(x) > \mid x \in X\}\tag{1}$$

*where $h_F(x)$ is a set of several values in [0, 1], which represents the possible membership degrees of an element $x \in X$ to the set F. Xia and Xu [46] call $h_F(x)$ a hesitant fuzzy element (HFE)*.

Preference relations are powerful tools for modeling the decision making process. On the basis of HFSs, Zhu [47] proposed the concept of hesitant fuzzy preference relations (HFPRs), which is given as follows.

**Definition 2** [47]**.** *Let $X = \{x_1, x_2, \ldots, x_n\}$ be a reference set; then a HFPR G on X is denoted by a matrix $G = (g_{ij})_{n \times n} \subset X \times X$, where $g_{ij} = \{q_{ij}^{\sigma(l)} \mid l = 1, \ldots, |l_{ij}|\}$ is a HFE expressing all possible preference degree(s) of the object $x_i$ over $x_j$. Furthermore, $g_{ij}$ ($i, j = 1, 2, \ldots, n$; $i < j$) should meet the following requirements:*

$$q_{ij}^{\sigma(l)} + q_{ji}^{\sigma(l)} = 1,\quad q_{ii}^{\sigma(l)} = 0.5,\quad |l_{ij}| = |l_{ji}| \tag{2}$$

$$q_{ij}^{\sigma(l)} < q_{ij}^{\sigma(l+1)},\quad q_{ji}^{\sigma(l+1)} < q_{ji}^{\sigma(l)}\tag{3}$$

*where $q_{ij}^{\sigma(l)}$ is the l-th element in $g_{ij}$, and $|l_{ij}|$ is the number of elements in $g_{ij}$*.

#### *2.3. Hesitant Fuzzy Linguistic Sets*

The concept, operational laws, and comparison method of HFLNs are recalled in this section, and their limitations are discussed in the corresponding places.

**Definition 3** [31]**.** *Let $X = \{x_1, x_2, \ldots, x_n\}$ be a fixed set, and $lv_{\theta(x)} \in LV$. Then, the hesitant fuzzy linguistic set (HFLS) U in X can be described as the following object:*

$$U = \left\{ < x, lv_{\theta(x)}, h_U(x) > \mid x \in X\right\} \tag{4}$$

*where $h_U(x)$ is a set of finitely many numbers in [0, 1] and signifies the possible degrees of membership with which x belongs to $lv_{\theta(x)}$*.

There are two special cases of HFLSs: (1) A hesitant fuzzy linguistic number (HFLN): there is only one element in the set $X = \{x_1, x_2, \ldots, x_n\}$, and the HFLS U reduces to $< lv_{\theta(x)}, h_U(x) >$; (2) A fuzzy linguistic number: there is only one element in $h_U(x)$, i.e., $h_U(x) = \{u\}$, and the HFLS U reduces to $< lv_{\theta(x)}, u >$. For example, $< lv_3, 0.5 >$ shows that the membership degree with which x belongs to $lv_3$ is 0.5.

The operational laws for HFLNs were introduced in the literature [31] as follows; based on them, many aggregation operators were also presented there.

**Definition 4** [31]**.** *Given two arbitrary HFLNs $a = < lv_{\theta(a)}, h_a >$ and $b = < lv_{\theta(b)}, h_b >$, and $\lambda \in [0, 1]$, then*

$$(1)\quad a \oplus_{\mathrm{Lin}} b = < lv_{\theta(a)+\theta(b)}, \cup_{r_1 \in h_a, r_2 \in h_b} \{r_1 + r_2 - r_1 \cdot r_2\} >;$$

$$(2)\quad \lambda a = < lv_{\lambda \cdot \theta(a)}, \cup_{r \in h_a} \{1 - (1 - r)^{\lambda}\} >.$$

It is clear that the operations mentioned above are not very reasonable as the linguistic values and the membership degrees are operated separately. In fact, the membership degrees should be related to the homologous linguistic values in the operation process.

**Definition 5** [29]**.** *If $a = < lv_{\theta(a)}, h_a >$ is a HFLN, then the score function $E(a)$ of a can be described as follows:*

$$E(a) = s(h_a) \times f^*(lv_{\theta(a)}) \tag{5}$$

*where $s(h_a) = \frac{1}{\#h_a}\sum_{r \in h_a} r$ is the score function of $h_a$, $\#h_a$ is the number of values in $h_a$, and $f^*(lv_i) = \frac{1}{2} + \frac{i}{2t}$ is one of the three expressions of the linguistic scale function defined by Wang et al. [29]; it can be replaced by other expressions under different semantics. For more details please refer to the literature [29]*.

**Definition 6** [29]**.** *Let $a = < lv_{\theta(a)}, h_a > = < lv_{\theta(a)}, \cup_{r \in h_a}\{r\} >$ be a HFLN, and let the variance function be $V^*(h_a) = \frac{1}{\#h_a}\sum_{r \in h_a} [r - s(h_a)]^2$. Then, the accuracy function $V(a)$ of a can be given as follows:*

$$V(a) = f^*(lv_{\theta(a)}) \cdot \left[1 - V^*(h_a)\right] \tag{6}$$

*where $\#h_a$ is the number of values in $h_a$*.

The accuracy function $V(a)$ is analogous to the sample variance in statistics and reflects the fluctuation of the assessment values in $h_a$. The greater the volatility is, the larger the hesitation will be. Then, the ranking order of HFLNs can be derived by using the score function and the accuracy function as follows.

**Definition 7** [29]**.** *If $a = < lv_{\theta(a)}, h_a >$ and $b = < lv_{\theta(b)}, h_b >$ are two arbitrary HFLNs, $r_a^{\sigma(l)}$ and $r_b^{\sigma(l)}$ are the l-th numbers in $h_a$ and $h_b$ respectively, and all membership degrees are arranged in ascending order, then the comparison method is*


**Example 1.** *Suppose $a = < lv_0, \{0.1, 0.4\} >$, $b = < lv_{-3}, \{0.1, 0.4\} >$ and $c = < lv_0, \{0.2, 0.3\} >$ are three HFLNs. Let $f^*(lv_i) = \frac{1}{2} + \frac{i}{2t}$ and $t = 3$, then*:

*(1) $lv_{\theta(b)} = lv_{-3} < lv_{\theta(a)} = lv_0$, $r_b^{\sigma(1)} = r_a^{\sigma(1)} = 0.1$, $r_b^{\sigma(2)} = r_a^{\sigma(2)} = 0.4$, thus $b < a$;*


There is no doubt that the amount of calculation increases when the score function or even the accuracy function needs to be calculated. Besides, according to this comparison method, if $E(a) = E(b)$ and $V(a) = V(b)$ hold simultaneously, the conclusion is that $a = b$. This is reasonable in most conditions. However, it does not hold when the linguistic scale function $f^*(lv_i) = 0$ and the possible memberships in a certain HFLN are not strictly superior to those in another HFLN. For instance, assume $\alpha = < lv_{-3}, \{0.1, 0.7\} >$, $\beta = < lv_{-3}, \{0.1, 0.9\} >$ and $\eta = < lv_{-3}, \{0.5, 0.6\} >$ are three HFLNs. Let $f^*(lv_i) = \frac{1}{2} + \frac{i}{2t}$ and $t = 3$; then $E(\alpha) = E(\beta) = E(\eta) = 0$ and $V(\alpha) = V(\beta) = V(\eta) = 0$, so $\alpha < \beta$ according to part (1) of the comparison method, while $\alpha = \eta$ and $\beta = \eta$ according to part (3) of this method. It is clear that these conclusions are self-contradictory and counterintuitive.
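The degenerate case above can be checked numerically. The following is a minimal sketch (not the authors' code) of the score function of Equation (5) and the accuracy function of Equation (6), assuming $t = 3$ and HFLNs stored as a pair of linguistic index and membership list; since $f^*(lv_{-3}) = 0$, both functions vanish for every HFLN with linguistic term $lv_{-3}$:

```python
# Score function E (Eq. (5)) and accuracy function V (Eq. (6)) with
# f*(lv_i) = 1/2 + i/(2t) and t = 3. For theta = -3, f* = 0, so E and V are
# 0 regardless of the membership values: the tie discussed in the text.

def f_star(i, t=3):
    """Linguistic scale function f*(lv_i) = 1/2 + i/(2t)."""
    return 0.5 + i / (2 * t)

def score(theta, h, t=3):
    """Score function E(a) = s(h_a) * f*(lv_theta)."""
    s = sum(h) / len(h)
    return s * f_star(theta, t)

def accuracy(theta, h, t=3):
    """Accuracy function V(a) = f*(lv_theta) * [1 - V*(h_a)]."""
    s = sum(h) / len(h)
    variance = sum((r - s) ** 2 for r in h) / len(h)
    return f_star(theta, t) * (1 - variance)

hflns = {"alpha": (-3, [0.1, 0.7]), "beta": (-3, [0.1, 0.9]), "eta": (-3, [0.5, 0.6])}
for name, (theta, h) in hflns.items():
    print(name, score(theta, h), accuracy(theta, h))  # all values are 0.0
```

All three HFLNs receive identical scores and accuracies, so Definition 7 cannot separate them.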

#### **3. New Operations and Comparison Method**

As mentioned in Section 2.3, there are some weaknesses in the existing operational laws and comparison method for HFLNs. Thus, new operations and a new comparison method are presented in this section.

#### *3.1. New Operational Laws and Aggregation Operators*

To overcome the limitations of operations proposed in Section 2.3, some new operational laws on the HFLNs are raised in this section. Afterwards, the hesitant fuzzy linguistic weighted average (HFLWA) operator and hesitant fuzzy linguistic average (HFLA) operator based on them are presented.

**Definition 8.** *If $a = < lv_{\theta(a)}, h_a >$ and $b = < lv_{\theta(b)}, h_b >$ are HFLNs, and $\lambda \in [0, 1]$, then*

$$(1)\quad a \oplus b = < lv_{\theta(a)+\theta(b)}, \cup_{r_1 \in h_a, r_2 \in h_b} \left\{ \frac{(\theta(a)+t) \cdot r_1 + (\theta(b)+t) \cdot r_2}{(\theta(a)+t) + (\theta(b)+t)} \right\} >;$$

*(2) $\lambda a = < lv_{\lambda \cdot \theta(a)}, h_a >$.*

It is easily verified that all operational results mentioned above are still HFLNs. Although the operational results have no practical meaning on their own, these basic operations are necessary in practice; when they are used together, their actual significance is reflected in reality. In view of Definition 8, the following equivalent relations can be further acquired.


(4) Distributivity: $\lambda_1 a \oplus \lambda_2 a = (\lambda_1 + \lambda_2) a$, $\lambda_1, \lambda_2 \in [0, 1]$.
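The new operational laws can be sketched in code. The following is a minimal illustration (not the authors' code) of Definition 8, assuming $t = 3$ and HFLNs stored as (theta, list of memberships); note how the addition relates the membership degrees to the linguistic indices via the weights $\theta + t$, rather than operating on them separately:

```python
# Definition 8 operations: addition combines memberships as a weighted average
# with weights (theta + t); scalar multiplication scales only the index.

def hfln_add(a, b, t=3):
    """a (+) b per Definition 8 (1)."""
    (ta, ha), (tb, hb) = a, b
    wa, wb = ta + t, tb + t
    h = sorted({round((wa * r1 + wb * r2) / (wa + wb), 10) for r1 in ha for r2 in hb})
    return (ta + tb, h)

def hfln_scale(lam, a):
    """lambda * a per Definition 8 (2): only the linguistic index is scaled."""
    ta, ha = a
    return (lam * ta, list(ha))

a = (1, [0.3])            # <lv_1, {0.3}>
b = (2, [0.6])            # <lv_2, {0.6}>
print(hfln_add(a, b))      # -> (3, [0.4666666667]): <lv_3, {(4*0.3 + 5*0.6)/9}>
print(hfln_scale(0.5, a))  # -> (0.5, [0.3])
```

The weighted average keeps every resulting membership inside [0, 1], so the result is always a HFLN, as claimed above.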

**Definition 9.** *Let $a_i = < lv_{\theta(a_i)}, h_{a_i} >$ ($i = 1, 2, \ldots, n$) be a group of HFLNs. The HFLWA operator can be denoted as follows:*

$$\mathrm{HFLWA}(a_1, a_2, \ldots, a_n) = \omega_1 a_1 \oplus \omega_2 a_2 \oplus \cdots \oplus \omega_n a_n \tag{7}$$

*where $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ is the weight vector of $a_i$ ($i = 1, 2, \ldots, n$), $\omega_i \in [0, 1]$ and $\sum_{i=1}^{n} \omega_i = 1$.*

In particular, if $\omega = (\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n})^T$, then the HFLWA operator reduces to the HFLA operator as follows:

$$\mathrm{HFLA}(a_1, a_2, \ldots, a_n) = \frac{1}{n} (a_1 \oplus a_2 \oplus \cdots \oplus a_n) \tag{8}$$

**Theorem 1.** *Assume $a_i = < lv_{\theta(a_i)}, h_{a_i} >$ ($i = 1, 2, \ldots, n$) are a set of HFLNs, and $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^T$ is the weight vector of $a_i$ with $\omega_i \in [0, 1]$ and $\sum_{i=1}^{n} \omega_i = 1$; then the aggregated result obtained by applying the HFLWA operator is still a HFLN, and*

$$\mathrm{HFLWA}(a_1, a_2, \ldots, a_n) = < lv_{\sum_{i=1}^{n} C_i}, \cup_{r_1 \in h_{a_1}, r_2 \in h_{a_2}, \ldots, r_n \in h_{a_n}} \left\{ \frac{\sum_{i=1}^{n} D_i \cdot r_i}{\sum_{i=1}^{n} D_i} \right\} > \tag{9}$$

*where $C_i = \omega_i \theta(a_i)$ and $D_i = \omega_i(\theta(a_i) + t)$ for all $i = 1, 2, \ldots, n$.*

**Proof.** Clearly, by Definition 8, the aggregated data by exploiting the HFLWA operator remains a HFLN. Next, Equation (9) is proved through utilizing mathematical induction on *n*.

(1) When $n = 2$: we have $\omega_1 a_1 = < lv_{C_1}, h_{a_1} >$ and $\omega_2 a_2 = < lv_{C_2}, h_{a_2} >$; then

$$\mathrm{HFLWA}(a_1, a_2) = \omega_1 a_1 \oplus \omega_2 a_2 = < lv_{C_1 + C_2}, \cup_{r_1 \in h_{a_1}, r_2 \in h_{a_2}} \left\{ \frac{D_1 \cdot r_1 + D_2 \cdot r_2}{D_1 + D_2} \right\} >,$$

so Equation (9) holds for $n = 2$.

(2) For $n = k$: if Equation (9) holds, then

$$\mathrm{HFLWA}(a_1, a_2, \ldots, a_k) = < lv_{\sum_{i=1}^{k} C_i}, \cup_{r_1 \in h_{a_1}, \ldots, r_k \in h_{a_k}} \left\{ \frac{\sum_{i=1}^{k} D_i \cdot r_i}{\sum_{i=1}^{k} D_i} \right\} >.$$

Hence, for $n = k + 1$, from Definition 8,

$$\mathrm{HFLWA}(a_1, \ldots, a_{k+1}) = \mathrm{HFLWA}(a_1, \ldots, a_k) \oplus (\omega_{k+1} \cdot a_{k+1}) = < lv_{\sum_{i=1}^{k} C_i + C_{k+1}}, \cup_{r_1 \in h_{a_1}, \ldots, r_{k+1} \in h_{a_{k+1}}} \left\{ \frac{\sum_{i=1}^{k} D_i \cdot \frac{\sum_{i=1}^{k} D_i \cdot r_i}{\sum_{i=1}^{k} D_i} + D_{k+1} \cdot r_{k+1}}{\sum_{i=1}^{k} D_i + D_{k+1}} \right\} > = < lv_{\sum_{i=1}^{k+1} C_i}, \cup_{r_1 \in h_{a_1}, \ldots, r_{k+1} \in h_{a_{k+1}}} \left\{ \frac{\sum_{i=1}^{k+1} D_i \cdot r_i}{\sum_{i=1}^{k+1} D_i} \right\} >,$$

i.e., Equation (9) follows for $n = k + 1$.

Therefore, combining (1) with (2), Equation (9) holds for all $n \in N$, which completes the proof of Theorem 1. □
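The closed form of Equation (9) can be sketched directly. The following is a minimal illustration (not the authors' code), assuming $t = 3$ and HFLNs stored as (theta, list of memberships):

```python
# HFLWA operator per Eq. (9): linguistic index sum(C_i) with C_i = w_i * theta_i,
# and memberships averaged with weights D_i = w_i * (theta_i + t) over all
# combinations of memberships drawn from the aggregated HFLNs.

from itertools import product

def hflwa(hflns, weights, t=3):
    C = [w * theta for w, (theta, _) in zip(weights, hflns)]
    D = [w * (theta + t) for w, (theta, _) in zip(weights, hflns)]
    memberships = sorted({
        round(sum(d * r for d, r in zip(D, rs)) / sum(D), 10)
        for rs in product(*[h for _, h in hflns])
    })
    return (sum(C), memberships)

a1, a2 = (1, [0.3, 0.5]), (-2, [0.4])
print(hflwa([a1, a2], [1.0, 0.0]))  # weight 1 on a1 recovers a1: (1.0, [0.3, 0.5])
print(hflwa([a1, a2], [0.5, 0.5]))  # -> (-0.5, [0.32, 0.48])
```

Putting the whole weight on one argument returns that argument unchanged, which is a quick sanity check on the formula.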

#### *3.2. Likelihood of Hesitant Fuzzy Linguistic Numbers*

The likelihood-based comparison method is an effective way to compare fuzzy numbers. Inspired by the literature [48,49], a new likelihood-based method to compare HFLNs is proposed. An example shows that the limitations of the comparison method mentioned in Section 2.3 are overcome when the proposed likelihood-based comparison method is adopted.

The likelihood between two HFLNs is described in the following:

**Definition 10.** *If $a = < lv_{\theta(a)}, h_a >$ and $b = < lv_{\theta(b)}, h_b >$ are two arbitrary HFLNs, then the likelihood between a and b can be defined as follows:*

$$L(a \ge b) = \begin{cases} 1, & lv_{\theta(a)} > lv_{\theta(b)},\ h_a^+ > h_b^- \\ \frac{1}{\#h_a \#h_b} \sum_{i=1}^{\#h_a} \sum_{j=1}^{\#h_b} \frac{r_a^{\sigma(i)}}{r_a^{\sigma(i)} + r_b^{\sigma(j)}}, & lv_{\theta(a)} = lv_{\theta(b)} \\ \frac{1}{\#h_a \#h_b} \sum_{i=1}^{\#h_a} \sum_{j=1}^{\#h_b} \frac{f^*(lv_{\theta(a)}) \cdot r_a^{\sigma(i)}}{f^*(lv_{\theta(a)}) \cdot r_a^{\sigma(i)} + f^*(lv_{\theta(b)}) \cdot r_b^{\sigma(j)}}, & lv_{\theta(a)} \ne lv_{\theta(b)} \\ 0, & lv_{\theta(a)} < lv_{\theta(b)},\ h_a^- < h_b^+ \end{cases} \tag{10}$$

*where $r_a^{\sigma(i)}$ and $r_b^{\sigma(j)}$ are the i-th and j-th largest values, $\#h_a$ and $\#h_b$ are the numbers of elements in $h_a$ and $h_b$ respectively, and $h^+$ and $h^-$ denote the maximum and minimum values in $h$.*

**Property 1.** *Suppose $\Omega$ is the set of all HFLNs; $\forall a, b, c \in \Omega$, the likelihood satisfies the following properties:*

*(1)* $0 \le L(a \ge b) \le 1$;


*(6) If $L(a \ge c) \ge 0.5$ and $L(c \ge b) \ge 0.5$, then $L(a \ge b) \ge 0.5$.*

**Proof.** We only prove (4) of Property 1 here, as the other properties can be easily proven.

(1) If $lv_{\theta(a)} < lv_{\theta(b)}$, $h_a^+ < h_b^-$ or $lv_{\theta(a)} > lv_{\theta(b)}$, $h_a^- > h_b^+$, then according to Definition 10, it is true that $L(a \ge b) + L(b \ge a) = 1$.

(2) If $lv_{\theta(a)} = lv_{\theta(b)}$, the following deduction can be derived: $L(a \ge b) = \frac{1}{\#h_a \#h_b} \sum_{i=1}^{\#h_a} \sum_{j=1}^{\#h_b} \frac{r_a^{\sigma(i)}}{r_a^{\sigma(i)} + r_b^{\sigma(j)}}$ and $L(a \le b) = L(b \ge a) = \frac{1}{\#h_a \#h_b} \sum_{j=1}^{\#h_b} \sum_{i=1}^{\#h_a} \frac{r_b^{\sigma(j)}}{r_a^{\sigma(i)} + r_b^{\sigma(j)}}$; then $L(a \ge b) + L(a \le b) = \frac{1}{\#h_a \#h_b} \sum_{i=1}^{\#h_a} \sum_{j=1}^{\#h_b} \frac{r_a^{\sigma(i)} + r_b^{\sigma(j)}}{r_a^{\sigma(i)} + r_b^{\sigma(j)}} = 1$.

(3) If $lv_{\theta(a)} \ne lv_{\theta(b)}$, similar to the proof of (2), we can obtain $L(a \le b) = L(b \ge a) = \frac{1}{\#h_a \#h_b} \sum_{j=1}^{\#h_b} \sum_{i=1}^{\#h_a} \frac{f^*(lv_{\theta(b)}) \cdot r_b^{\sigma(j)}}{f^*(lv_{\theta(a)}) \cdot r_a^{\sigma(i)} + f^*(lv_{\theta(b)}) \cdot r_b^{\sigma(j)}}$, so $L(a \ge b) + L(a \le b) = \frac{1}{\#h_a \#h_b} \sum_{i=1}^{\#h_a} \sum_{j=1}^{\#h_b} \frac{f^*(lv_{\theta(a)}) \cdot r_a^{\sigma(i)} + f^*(lv_{\theta(b)}) \cdot r_b^{\sigma(j)}}{f^*(lv_{\theta(a)}) \cdot r_a^{\sigma(i)} + f^*(lv_{\theta(b)}) \cdot r_b^{\sigma(j)}} = 1$. Therefore, $L(a \ge b) + L(a \le b) = 1$.

Now, the proof is completed. □

**Definition 11.** *Let $a = < lv_{\theta(a)}, h_a >$ and $b = < lv_{\theta(b)}, h_b >$ be two HFLNs. The new comparison method for HFLNs can be defined as follows:*


**Example 2.** *Suppose that the three HFLNs are the same as in Example 1; the comparison results with the newly proposed comparison method are given as follows.*


It is true that the results in Examples 1 and 2 are the same, which verifies the validity of the presented comparison method. Moreover, assume $\alpha = < lv_{-3}, \{0.1, 0.7\} >$, $\beta = < lv_{-3}, \{0.1, 0.9\} >$ and $\eta = < lv_{-3}, \{0.5, 0.6\} >$ are three HFLNs, $f^*(lv_i) = \frac{1}{2} + \frac{i}{2t}$ and $t = 3$; then $L(\alpha \ge \beta) = 0.4781$, $L(\alpha \ge \eta) = 0.3578$ and $L(\beta \ge \eta) = 0.3881$. Thus we conclude that $\alpha < \beta < \eta$, which is more reasonable than the results obtained by using the previous comparison method.
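These likelihood values can be reproduced with a short sketch (not the authors' code) of the equal-linguistic-term case of Equation (10), where the likelihood is the mean of $r_a / (r_a + r_b)$ over all membership pairs:

```python
# Likelihood L(a >= b) of Eq. (10) when lv_theta(a) = lv_theta(b):
# average of r_a / (r_a + r_b) over all pairs of membership values.

def likelihood_equal_terms(ha, hb):
    return sum(ra / (ra + rb) for ra in ha for rb in hb) / (len(ha) * len(hb))

alpha, beta, eta = [0.1, 0.7], [0.1, 0.9], [0.5, 0.6]
print(round(likelihood_equal_terms(alpha, beta), 4))  # 0.4781
print(round(likelihood_equal_terms(alpha, eta), 4))   # 0.3578
print(round(likelihood_equal_terms(beta, eta), 4))    # 0.3881
```

The complementarity of Property 1, $L(a \ge b) + L(b \ge a) = 1$, also holds numerically for these inputs.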

#### **4. Decision Making Framework**

In this section, a decision making framework is proposed to handle decision making problems under a hesitant linguistic environment. Original preference information is expressed by HLPRs and the consistency level is checked and improved. Then, a likelihood-based model is suggested to derive a ranking from HLPRs with acceptable consistency.

#### *4.1. Original Preference Information*

When making evaluations of some alternatives under a hesitant linguistic environment, DMs can provide original preference information with HLPRs. To facilitate the following discussion, the concepts of HLPRs and consistent HLPRs are defined as follows.

**Definition 12.** *If $X = \{x_1, x_2, \ldots, x_n\}$ is a set of alternatives, then the HLPR K on X can be described as a matrix $K = (k_{ij})_{n \times n} \subset X \times X$. Each element $k_{ij} = < lv_{ij}, r_{ij} >$ is a HFLN, where $lv_{ij}$ and $r_{ij}$ demonstrate, respectively, the degree to which $x_i$ is preferred to $x_j$ and the possible membership degrees with which this preference belongs to $lv_{ij}$. Then, for $k_{ij}$ ($i, j = 1, 2, \ldots, n$, $i < j$), the following requirements should be met:*

$$lv_{ij} \oplus lv_{ji} = lv_0,\quad lv_{ii} = lv_0,\quad r_{ij}^{\sigma(l)} = r_{ji}^{\sigma(l)},\quad r_{ii}^{\sigma(l)} = 1,\quad |k_{ij}| = |k_{ji}| \tag{11}$$

*where $r_{ij}^{\sigma(l)}$ is the l-th element in $r_{ij}$, and $|k_{ij}|$ is the number of values in $k_{ij}$.*

**Definition 13.** *Let $K = (k_{ij})_{n \times n}$ be a HLPR; if*

$$r_{ik}^{\sigma(l)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(l)} \cdot lv_{kj} = r_{ij}^{\sigma(l)} \cdot lv_{ij} \quad (i, j, k = 1, 2, \ldots, n) \tag{12}$$

*then K is a consistent HLPR.*

**Example 3.** *Given a HLPR*

$$K_1 = \begin{bmatrix} < lv_0, \{1\} > & < lv_1, \{0.3, 0.9\} > & < lv_{-2}, \{0.1, 0.6\} > \\ < lv_{-1}, \{0.3, 0.9\} > & < lv_0, \{1\} > & < lv_2, \{0.4, 0.9\} > \\ < lv_2, \{0.1, 0.6\} > & < lv_{-2}, \{0.4, 0.9\} > & < lv_0, \{1\} > \end{bmatrix}.$$

*Since $r_{13}^{\sigma(1)} \cdot lv_{13} = lv_{-0.2}$ and $r_{12}^{\sigma(1)} \cdot lv_{12} \oplus r_{23}^{\sigma(1)} \cdot lv_{23} = lv_{1.1}$, that is $r_{13}^{\sigma(1)} \cdot lv_{13} \ne r_{12}^{\sigma(1)} \cdot lv_{12} \oplus r_{23}^{\sigma(1)} \cdot lv_{23}$, $K_1$ is not a consistent HLPR.*
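The check of Example 3 can be sketched numerically (not the authors' code), representing each scaled term $r_{ij}^{\sigma(l)} \cdot lv_{ij}$ by the scaled index $r \cdot \theta$, so that $\oplus$ on linguistic terms becomes ordinary addition of indices:

```python
# Additive-consistency check of Eq. (12) on K_1 from Example 3,
# using the sigma(1) (smallest) membership of each cell.

theta = [[0, 1, -2],
         [-1, 0, 2],
         [2, -2, 0]]
r1 = [[1.0, 0.3, 0.1],
      [0.3, 1.0, 0.4],
      [0.1, 0.4, 1.0]]

def u(i, j):
    """Scaled linguistic index r_ij^{sigma(1)} * theta_ij."""
    return r1[i][j] * theta[i][j]

lhs = u(0, 1) + u(1, 2)   # r_12*lv_12 (+) r_23*lv_23 -> lv_1.1
rhs = u(0, 2)             # r_13*lv_13 -> lv_-0.2
print(round(lhs, 6), round(rhs, 6), abs(lhs - rhs) < 1e-9)  # 1.1 -0.2 False
```

Since the two sides differ, Equation (12) fails for $(i, j, k) = (1, 3, 2)$ and $K_1$ is not consistent, matching the conclusion above.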

**Theorem 2.** *Assume a HLPR $K = (k_{ij})_{n \times n}$; if*

$$\max \left\{ \oplus\_{k=1}^{n} (r\_{ik}^{\sigma(l)} \cdot lv\_{ik} \oplus r\_{kj}^{\sigma(l)} \cdot lv\_{kj}) \right\} < lv\_0 \text{ or } \min \left\{ \oplus\_{k=1}^{n} (r\_{ik}^{\sigma(l)} \cdot lv\_{ik} \oplus r\_{kj}^{\sigma(l)} \cdot lv\_{kj}) \right\} > lv\_0 \tag{13}$$

*then $K = (k_{ij})_{n \times n}$ has a corresponding consistent HLPR.*

**Proof.** The proof is straightforward. According to Equation (11), if $\min\{\oplus_{k=1}^{n}(r_{ik}^{\sigma(l)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(l)} \cdot lv_{kj})\} < lv_0$ and $\max\{\oplus_{k=1}^{n}(r_{ik}^{\sigma(l)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(l)} \cdot lv_{kj})\} > lv_0$, some calculated membership degrees will be less than zero, which is clearly unreasonable. Therefore, when $\max\{\oplus_{k=1}^{n}(r_{ik}^{\sigma(l)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(l)} \cdot lv_{kj})\} < lv_0$ or $\min\{\oplus_{k=1}^{n}(r_{ik}^{\sigma(l)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(l)} \cdot lv_{kj})\} > lv_0$, the corresponding consistent HLPR of $K = (k_{ij})_{n \times n}$ exists. □

**Example 4.** *Given a HLPR*

$$K_2 = \begin{bmatrix} < lv_0, \{1\} > & < lv_1, \{0.3, 0.5\} > & < lv_{-1}, \{0.1, 0.7\} > \\ < lv_{-1}, \{0.3, 0.5\} > & < lv_0, \{1\} > & < lv_1, \{0.4, 0.8\} > \\ < lv_1, \{0.1, 0.7\} > & < lv_{-1}, \{0.4, 0.8\} > & < lv_0, \{1\} > \end{bmatrix}.$$

*Since $\oplus_{k=1}^{3}(r_{1k}^{\sigma(1)} \cdot lv_{1k} \oplus r_{k3}^{\sigma(1)} \cdot lv_{k3}) = lv_{0.5} > lv_0$ and $\oplus_{k=1}^{3}(r_{1k}^{\sigma(2)} \cdot lv_{1k} \oplus r_{k3}^{\sigma(2)} \cdot lv_{k3}) = lv_{-0.1} < lv_0$, $K_2$ does not have a consistent HLPR.*

Note that when a HLPR $K = (k_{ij})_{n \times n}$ does not have a corresponding consistent HLPR, it should be adjusted based on Equation (14) until a consistent HLPR exists.

**Theorem 3.** *Assume a HLPR $K = (k_{ij})_{n \times n}$ has a consistent HLPR; for all $i, j, k = 1, 2, \ldots, n$, if*

$$r_{ij}^{*\sigma(l)} \cdot lv_{ij}^{*} = \frac{1}{n} \oplus_{k=1}^{n} (r_{ik}^{\sigma(l)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(l)} \cdot lv_{kj}), \tag{14}$$

$$lv_{ij}^{*} = \max_k \left\{ r_{ik}^{\sigma(1)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(1)} \cdot lv_{kj} \right\} \quad \left(\text{if } \oplus_{k=1}^{n} \left( r_{ik}^{\sigma(1)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(1)} \cdot lv_{kj} \right) > lv_0\right) \tag{15}$$

$$lv_{ij}^{*} = \min_k \left\{ r_{ik}^{\sigma(2)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(2)} \cdot lv_{kj} \right\} \quad \left(\text{if } \oplus_{k=1}^{n} \left( r_{ik}^{\sigma(2)} \cdot lv_{ik} \oplus r_{kj}^{\sigma(2)} \cdot lv_{kj} \right) < lv_0\right) \tag{16}$$

*then $K^* = (k_{ij}^*)_{n \times n} = (< lv_{ij}^*, r_{ij}^* >)_{n \times n}$ is a consistent HLPR.*

**Proof.** Since $r_{ik}^{*\sigma(l)} \cdot lv_{ik}^{*} \oplus r_{kj}^{*\sigma(l)} \cdot lv_{kj}^{*} = \frac{1}{n} (\oplus_{e=1}^{n}(r_{ie}^{\sigma(l)} \cdot lv_{ie} \oplus r_{ek}^{\sigma(l)} \cdot lv_{ek})) \oplus \frac{1}{n} (\oplus_{e=1}^{n}(r_{ke}^{\sigma(l)} \cdot lv_{ke} \oplus r_{ej}^{\sigma(l)} \cdot lv_{ej})) = \frac{1}{n} (\oplus_{e=1}^{n}(r_{ie}^{\sigma(l)} \cdot lv_{ie} \oplus r_{ek}^{\sigma(l)} \cdot lv_{ek} \oplus r_{ke}^{\sigma(l)} \cdot lv_{ke} \oplus r_{ej}^{\sigma(l)} \cdot lv_{ej})) = \frac{1}{n} (\oplus_{e=1}^{n}(r_{ie}^{\sigma(l)} \cdot lv_{ie} \oplus r_{ej}^{\sigma(l)} \cdot lv_{ej} \oplus r_{ek}^{\sigma(l)} \cdot lv_0)) = \frac{1}{n} (\oplus_{e=1}^{n}(r_{ie}^{\sigma(l)} \cdot lv_{ie} \oplus r_{ej}^{\sigma(l)} \cdot lv_{ej})) = r_{ij}^{*\sigma(l)} \cdot lv_{ij}^{*}$, based on Definition 13, $K^* = (k_{ij}^*)_{n \times n} = (< lv_{ij}^*, r_{ij}^* >)_{n \times n}$ is a consistent HLPR. □

**Example 5.** *Assume a HLPR is the same as in Example 3. Based on Equation (14), the consistent HLPR $K_1^*$ is obtained as follows:*

$$K_1^* = \begin{bmatrix} < lv_0, \{1\} > & < lv_{-1}, \{2/15, 7/15\} > & < lv_{1.1}, \{7/33, 1/11\} > \\ < lv_1, \{2/15, 7/15\} > & < lv_0, \{1\} > & < lv_{0.8}, \{11/24, 5/8\} > \\ < lv_{-1.1}, \{7/33, 1/11\} > & < lv_{-0.8}, \{11/24, 5/8\} > & < lv_0, \{1\} > \end{bmatrix}.$$
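The core of Equation (14) can be sketched on the scaled indices alone (not the authors' code): with $v_{ij} = r_{ij}^{\sigma(l)} \cdot \theta_{ij}$, the repaired values $v_{ij}^{*}$ are row/column averages of the original ones, and the result satisfies the additive consistency of Definition 13. Using $K_1$ of Example 3 with the $\sigma(1)$ memberships:

```python
# Eq. (14) on scaled indices: v*_ij = (1/n) * sum_k (v_ik + v_kj),
# where v_ij = r_ij^{sigma(1)} * theta_ij for K_1 of Example 3.

theta = [[0, 1, -2], [-1, 0, 2], [2, -2, 0]]
r1 = [[1.0, 0.3, 0.1], [0.3, 1.0, 0.4], [0.1, 0.4, 1.0]]
n = 3

v = [[r1[i][j] * theta[i][j] for j in range(n)] for i in range(n)]
v_star = [[sum(v[i][k] + v[k][j] for k in range(n)) / n for j in range(n)]
          for i in range(n)]

# Definition 13 now holds on the repaired scaled indices.
consistent = all(abs(v_star[i][k] + v_star[k][j] - v_star[i][j]) < 1e-9
                 for i in range(n) for j in range(n) for k in range(n))
print(round(v_star[0][2], 4), consistent)  # 0.2333 True (cell (1,3): 1.1 * 7/33)
```

The cell $(1, 3)$ reproduces $lv_{1.1}$ with membership $7/33$, since $1.1 \times 7/33 = 7/30 \approx 0.2333$, consistent with $K_1^*$ above.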

#### *4.2. Consistency Checking and Improving Models*

When an initial preference matrix is constructed, checking and improving its consistency is necessary and vital [50–52]. The consistency of preference relations reflects the rationality of DMs' judgments, and inconsistent preference matrices may generate undesirable or improper conclusions. In this section, a likelihood-based consistency index is defined to test the consistency degree and a consistency-improving process is presented to modify the consistency level.

**Definition 14.** *Given two arbitrary HLPRs $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$, then*

$$L(A \ge B) = \frac{2}{n(n-1)} \sum_{i < j} L(a_{ij} \ge b_{ij}) \tag{17}$$

*is called the likelihood between two HLPRs.*

The likelihood $L(A \ge B)$ satisfies Theorem 4 as follows.

**Theorem 4.** *Assume A and B are two HLPRs and the likelihood between them is $L(A \ge B)$; then*

*(1)* 0 ≤ *L*(*A* ≥ *B*) ≤ 1;

*(2) L*(*A* ≥ *B*) + *L*(*B* ≥ *A*) = 1;

*(3) If L*(*A* ≥ *B*) = *L*(*B* ≥ *<sup>A</sup>*)*, then L*(*A* ≥ *B*) = *L*(*B* ≥ *A*) = 0.5.

**Definition 15.** *Suppose a HLPR K has the corresponding consistent HLPR $K^*$; a consistency index is used to measure the deviation between K and $K^*$, which is defined as*

$$CI(K) = \frac{1}{n(n-1)} \sum_{i \neq j}^{n} \left| L(k_{ij} \ge k_{ij}^*) - \frac{1}{2} \right| \tag{18}$$

It is true that $0 \le CI(K) \le \frac{1}{2}$. Based on Definition 15, a smaller value of $CI(K)$ means a more consistent HLPR K. Since DMs are often influenced by many uncertainties when they make decisions, the HLPRs they provide are not always perfectly consistent.

**Definition 16.** *Given a HLPR K and the corresponding threshold value $\overline{CI}$, when the consistency index meets*

$$CI(K) < \overline{CI} \tag{19}$$

*then K is regarded as a HLPR whose consistency is acceptable.*

Note: how to determine the value of $\overline{CI}$ is an interesting subject; it may be set in accordance with the DMs' knowledge, experience, and other conditions.

In some circumstances, the HLPR K constructed by the DMs has unacceptable consistency due to a lack of knowledge or other reasons. Hence, a consistency-improving model is built to acquire a reasonable solution. Some critical steps in Algorithm 1 can be repeated until the predefined consistency threshold is satisfied.

The main steps of this consistency-improving process are shown as follows.

**Algorithm 1.** Consistency improving model of HLPRs

**Input**: The original HLPR $K = (k_{ij})_{n \times n}$, the threshold value $\overline{CI} = \overline{CI}_0$ and the maximum number of iterations $s_{\max} \ge 1$.

**Output**: The adjusted HLPR $K^a$ and its consistency index $CI(K^a)$.

Step 1: Let the iteration counter $s = 0$, and the original HLPR $K = K^{(0)} = (k_{ij}^{(0)})_{n \times n}$.

Step 2: According to Equation (14), obtain the corresponding consistent HLPR $K^{*(s)} = (k_{ij}^{*(s)})_{n \times n} = (< lv_{ij}^{*(s)}, r_{ij}^{*(s)} >)_{n \times n}$ of the HLPR $K^{(s)} = (k_{ij}^{(s)})_{n \times n}$.

Step 3: Based on Equation (10), calculate the likelihood $L(k_{ij}^{(s)} \ge k_{ij}^{*(s)})$ of the corresponding elements in the HLPR $K^{(s)}$ and its consistent HLPR $K^{*(s)}$. Then, construct the likelihood matrix $L^{(s)} = (l_{ij}^{(s)})_{n \times n} = (L(k_{ij}^{(s)} \ge k_{ij}^{*(s)}))_{n \times n}$ of the HLPR $K^{(s)}$.

Step 4: Calculate the consistency index $CI(K^{(s)})$ of the HLPR $K^{(s)}$ by Equation (18).

Step 5: If the consistency level of $K^{(s)}$ is acceptable, namely $CI(K^{(s)}) < \overline{CI}_0$, or the iteration counter is at its maximum, namely $s > s_{\max}$, then go to Step 7; otherwise, go to the next step.

Step 6: Find the element $l_{ij}^{(s)}$ in the likelihood matrix $L^{(s)} = (l_{ij}^{(s)})_{n \times n}$ with the maximum deviation, namely $\max\{|l_{ij}^{(s)} - \frac{1}{2}| + |l_{ji}^{(s)} - \frac{1}{2}|\}$. If $l_{ij}^{(s)} + l_{ji}^{(s)} - 1 < 0$, then the DMs may increase their preference value of $k_{ij}^{(s)}$; if $l_{ij}^{(s)} + l_{ji}^{(s)} - 1 > 0$, then the DMs may decrease their value of $k_{ij}^{(s)}$. The modified HLPR is denoted as $K^{(s+1)} = (k_{ij}^{(s+1)})_{n \times n} = (< lv_{ij}^{(s+1)}, r_{ij}^{(s+1)} >)_{n \times n}$. Let $s = s + 1$, then return to Step 2.

Step 7: Let the final adjusted HLPR be $K^a = K^{(s)}$; output $K^a$ and its consistency index $CI(K^a)$.

**Theorem 5.** *Given a HLPR K with unacceptable consistency, let $\overline{CI} = \overline{CI}_0$ be the consistency threshold, $K^{(s)}$ the HLPR sequence generated by Algorithm 1, and $CI(K^{(s)})$ the consistency index of $K^{(s)}$. Then, for any s, $CI(K^{(s+1)}) < CI(K^{(s)})$ and $\lim_{s \to \infty} CI(K^{(s)}) = 0$.*

The proof is straightforward. There is at least one position where $|l_{ij}^{(s+1)} - \frac{1}{2}| < |l_{ij}^{(s)} - \frac{1}{2}|$ can be obtained. It follows that $CI(K^{(s+1)}) < CI(K^{(s)})$.

Theorem 5 guarantees that any HLPR with unacceptable consistency can be converted into an acceptable one. The speed and number of iterations may be influenced by the values of the adjusted elements, which are recommended by the DMs or specialists according to the practical situation. How to determine the values of the adjusted elements more reasonably is an open issue and deserves further investigation.

**Example 6.** *Given an original HLPR*

$$K = \begin{bmatrix} < lv_0, \{1\} > & < lv_3, \{0.6, 0.7\} > & < lv_{-2}, \{0.8, 0.9\} > \\ < lv_{-3}, \{0.6, 0.7\} > & < lv_0, \{1\} > & < lv_1, \{0.2, 0.3\} > \\ < lv_2, \{0.8, 0.9\} > & < lv_{-1}, \{0.2, 0.3\} > & < lv_0, \{1\} > \end{bmatrix}.$$

*Suppose the threshold $\overline{CI}_0 = 0.25$ and the maximum number of iterations $s_{\max} = 3$; check and improve its consistency. The detailed procedures are listed as follows.*

*Step 1: Let $s = 0$ and $K^{(0)} = K$.*

*Step 2: Based on Equation (14), obtain the consistent HLPR*

$$K^{*(0)} = \begin{bmatrix} < lv_0, \{1\} > & < lv_{2.1}, \{2/7, 1/3\} > & \cdots \\ < lv_{-2.1}, \{2/7, 1/3\} > & < lv_0, \{1\} > & \cdots \\ \cdots & \cdots & < lv_0, \{1\} > \end{bmatrix}.$$
 
*Step 3: Based on Equation (10), the likelihood matrix is*

$$L^{(0)} = \begin{bmatrix} 0.5 & 1 & 0.7762 \\ 0.5249 & 0.5 & 0.9781 \\ 1 & 0.2590 & 0.5 \end{bmatrix}.$$
 

*Step 4: Based on Equation (18), calculate the consistency index CI*(*K*(0)) ≈ 0.2534.

*Step 5: Since CI*(*K*(0)) > *CI*0*, go to the next step.*

*Step 6: Since* $l\_{13}^{(0)} = \max\{|l\_{ij}^{(0)} - \frac{1}{2}| + |l\_{ji}^{(0)} - \frac{1}{2}|\}$ *and* $l\_{13}^{(0)} + l\_{31}^{(0)} - 1 > 0$*, the DMs decrease their preference. The modified HLPR is*

$$K^{(1)} = \begin{bmatrix} <lv\_{0}, \{1\}> & <lv\_{3}, \{0.6, 0.7\}> & <lv\_{-2}, \{0.1, 0.2\}> \\ <lv\_{-3}, \{0.6, 0.7\}> & <lv\_{0}, \{1\}> & <lv\_{1}, \{0.2, 0.3\}> \\ <lv\_{2}, \{0.1, 0.2\}> & <lv\_{-1}, \{0.2, 0.3\}> & <lv\_{0}, \{1\}> \end{bmatrix}$$

*and CI*(*K*(1)) ≈ 0.2230 < *CI*0.

*Step 7: Let Ka* = *K*(1) *and output*

$$K\_{a} = \begin{bmatrix} <lv\_{0}, \{1\}> & <lv\_{3}, \{0.6, 0.7\}> & <lv\_{-2}, \{0.1, 0.2\}> \\ <lv\_{-3}, \{0.6, 0.7\}> & <lv\_{0}, \{1\}> & <lv\_{1}, \{0.2, 0.3\}> \\ <lv\_{2}, \{0.1, 0.2\}> & <lv\_{-1}, \{0.2, 0.3\}> & <lv\_{0}, \{1\}> \end{bmatrix}$$

*and CI*(*Ka*) ≈ 0.2230.
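The selection rule used in Step 6, that is, pick the off-diagonal pair (*i*, *j*) maximizing |*lij* − 1/2| + |*lji* − 1/2| and then decrease the preference when *lij* + *lji* − 1 > 0 (increase it otherwise), can be sketched in code. This is a minimal illustration using the likelihood matrix *L*(0) computed in Step 3; the function name is ours:

```python
def find_adjustment(L):
    """Locate the off-diagonal pair of a likelihood matrix with the
    largest total deviation from 1/2, and decide the adjustment
    direction: "decrease" when l_ij + l_ji - 1 > 0, else "increase"."""
    n = len(L)
    best, best_dev = None, -1.0
    for i in range(n):
        for j in range(i + 1, n):
            dev = abs(L[i][j] - 0.5) + abs(L[j][i] - 0.5)
            if dev > best_dev:
                best, best_dev = (i, j), dev
    i, j = best
    direction = "decrease" if L[i][j] + L[j][i] - 1 > 0 else "increase"
    return best, direction

# Likelihood matrix L^(0) from Step 3 (0-based indices)
L0 = [[0.5,    1.0,    0.7762],
      [0.5249, 0.5,    0.9781],
      [1.0,    0.2590, 0.5]]

pos, direction = find_adjustment(L0)
# pos == (0, 2), i.e., entry l_13, and direction == "decrease"
```

The sketch reproduces Step 6: position (1, 3) has the largest deviation, and the DMs are asked to decrease their preference there.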

#### *4.3. Likelihood-Based Ranking Method*

As the likelihood between two HFLNs is a useful tool for making comparisons, this section introduces a likelihood-based method to derive a ranking from consistent HLPRs.

Consider a decision making problem in the hesitant fuzzy linguistic context, where the DMs plan to select the optimal alternative or obtain a ranking order of *n* objects. Let *X* = {*x*1, *x*2, ..., *xn*} be a discrete set of alternatives and *K* = (*kij*)*n*×*n* (*i*, *j* = 1, 2, ... , *n*) be the preference matrix, where *kij* is the preference value in the form of an HFLN. The entire procedure for obtaining the ideal order of alternatives is shown in Algorithm 2.

#### **Algorithm 2.** Likelihood-based ranking method

**Input**: The initial HLPR *K* = (*kij*)*n*×*n*.

**Output**: The optimal alternative *x*∗.

Step 1: Obtain the acceptable HLPR *Ka* by Algorithm 1.

Step 2: Utilize the HFLA operator based on Equation (8) to aggregate each row of the HLPR *Ka*, then determine the overall preference degree *pi* of each alternative *xi* (*i* = 1, 2, . . . , *n*).

Step 3: According to Equation (10), calculate the likelihood *lij* = *<sup>L</sup>*(*pi* ≥ *pj*) between *pi* and *pj* (*i* = 1, 2, . . . , *n*, *j* = 1, 2, . . . , *n*), then construct a likelihood matrix *L* = (*lij*)*n*×*n*.

Step 4: Calculate the dominance degree $\phi(x\_i) = \frac{1}{n}\sum\_{j=1}^{n} l\_{ij}$ of alternative *xi* (*i* = 1, 2, . . . , *n*), where *ϕ*(*xi*) represents the degree to which *xi* is preferred to the other alternatives. Obviously, the greater the value of *ϕ*(*xi*), the better the alternative *xi*.

Step 5: Rank all the alternatives on the basis of the dominance degree *ϕ*(*xi*) of each alternative *xi* (*i* = 1, 2, . . . , *n*). Then obtain the ranking results; the optimal alternative(s) is denoted as *x*∗.
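Once the likelihood matrix is in hand, Steps 4 and 5 of Algorithm 2 reduce to row-averaging and sorting. A minimal sketch follows; the 3 × 3 matrix of Example 6 is reused here purely as sample input, and the names are ours:

```python
def dominance_ranking(L):
    """Dominance degrees phi_i = (1/n) * sum_j l_ij, followed by the
    alternatives sorted from best to worst (0-based indices)."""
    n = len(L)
    phi = [sum(row) / n for row in L]
    ranking = sorted(range(n), key=lambda i: phi[i], reverse=True)
    return phi, ranking

# Sample likelihood matrix (the 3 x 3 matrix of Example 6)
L = [[0.5,    1.0,    0.7762],
     [0.5249, 0.5,    0.9781],
     [1.0,    0.2590, 0.5]]

phi, ranking = dominance_ranking(L)
# ranking == [0, 1, 2]: alternative x_1 dominates on this input
```

Sorting directly on the dominance degrees avoids any secondary score computation, which is the point made in Section 5.2.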

#### **5. Selection of Mine Ventilation Systems**

In this section, an example of mine ventilation system selection is provided to demonstrate the application of the suggested method.

Sanshandao gold mine, the first subsea hard rock mine in China, lies in Sanshandao Town, Laizhou City, Shandong Province, China [53]. As the mine enters the stage of deep exploitation, the ventilation distance becomes longer and the temperature rises severely. Therefore, some problems have begun to appear with the traditional ventilation system: the temperature is so high that laborers find it hard to work efficiently; exhaust gas emitted by diesel equipment seriously pollutes the underground air; and the dust concentration exceeds the national standard. Accordingly, a better ventilation system needs to be adopted.

After a thorough survey, four ventilation systems, i.e., {*vs*1, *vs*2, *vs*3, *vs*4}, are under consideration, and a group of professionals is invited to select the optimal one. The linguistic term set *lv* = {*lv*−<sup>4</sup> = *tremendously worse*, *lv*−<sup>3</sup> = *a lot worse*, *lv*−<sup>2</sup> = *worse*, *lv*−<sup>1</sup> = *a little worse*, *lv*0 = *fair*, *lv*1 = *a little better*, *lv*2 = *better*, *lv*3 = *a lot better*, *lv*4 = *tremendously better*} is used, and the preference values are given in the form of HFLNs. Suppose all DMs reach a consensus on the selected linguistic term, and each team provides its membership degrees (preferences) based on its research into the four systems. All of the probable membership degrees are then gathered with the chosen linguistic term. When a team does not give a membership degree, it is taken as 0.5; and when the same membership degree is given for an identical linguistic term more than once, the duplicates are regarded as different data in an HFLN.
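The two gathering conventions just described (a missing membership degree defaults to 0.5, and repeated degrees are kept as separate data) can be sketched as follows. This is a hypothetical helper, with `None` marking a team that gave no degree:

```python
def gather_degrees(team_degrees):
    """Collect the membership degrees given by several teams for one
    linguistic term: a missing degree (None) is taken as 0.5, and
    identical degrees are kept as distinct data in the HFLN."""
    return [0.5 if d is None else d for d in team_degrees]

# Three teams: two give 0.6, one gives no degree at all
degrees = gather_degrees([0.6, None, 0.6])
# degrees == [0.6, 0.5, 0.6]
```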

Consequently, after a heated discussion, the experts set the threshold value *CI*0 = 0.18 and the maximum number of iterations *s*max = 3. The preference information is given in Table 1.


**Table 1.** Original HLPR *VS*.

## *5.1. Illustrative Example*

In this section, the steps outlined in Section 4.3 are carried out to obtain the satisfactory ventilation system(s).

Step 1: Obtain the acceptable HLPR *VSa* by Algorithm 1.

Based on Equation (14), the consistent HLPR *VS*∗ is shown in Table 2, and the likelihood matrix *L*(0) calculated by Equation (10) is shown in Table 3. Then, the consistency index *CI*(*VS*(0)) ≈ 0.1956 > 0.18 is obtained by Equation (18). Since $l\_{34}^{(0)} = \max\{|l\_{ij}^{(0)} - \frac{1}{2}| + |l\_{ji}^{(0)} - \frac{1}{2}|\}$ and $l\_{34}^{(0)} + l\_{43}^{(0)} - 1 > 0$, the DMs decrease their preference, and the modified HLPR *VS*(1) is given in Table 4. Since *CI*(*VS*(1)) ≈ 0.1754 < 0.18, let *VSa* = *VS*(1).
**Table 2.** Consistent HLPR *VS*∗.



**Table 3.** Likelihood matrix *L*(0).



**Table 4.** Modified HLPR *VS*(1).

Step 2: Utilize the HFLA operator based on Equation (8) to aggregate each row of the HLPR *VSa*, then the overall preference degree *pi* of each alternative is acquired as follows:

*p*1 = < *lv*1.5, {0.4182, 0.4455, 0.4500, 0.4636, 0.4773, 0.4909, 0.4955, 0.5091, 0.5227, 0.5364, 0.5409, 0.5455, 0.5545, 0.5682, 0.5727, 0.5864, 0.5909, 0.6000, 0.6182, 0.6318, 0.6364, 0.6455, 0.6636, 0.6773, 0.6818, 0.7273, 0.7727} >,

*p*2 = < *lv*−0.5, {0.4429, 0.4500, 0.4571, 0.4643, 0.4714, 0.4857, 0.5000, 0.5071, 0.5286, 0.5929, 0.6000, 0.6071, 0.6143, 0.6214, 0.6357, 0.6429, 0.6500, 0.6500, 0.6571, 0.6571, 0.6643, 0.6714, 0.6786, 0.6857, 0.7000, 0.7071, 0.7286} >,

*p*3 = < *lv*0, {0.4563, 0.4750, 0.4938, 0.4938, 0.4938, 0.5125, 0.5125, 0.5313, 0.5313, 0.5313, 0.5313, 0.5500, 0.5500, 0.5688, 0.5688, 0.5688, 0.5875, 0.6063, 0.6063, 0.6250, 0.6438, 0.6438, 0.6625, 0.6813, 0.6813, 0.7000, 0.7188} >, and

*p*4 = < *lv*−1, {0.4417, 0.4583, 0.4667, 0.4750, 0.4833, 0.4833, 0.4917, 0.5000, 0.5083, 0.5167, 0.5250, 0.5250, 0.5250, 0.5333, 0.5417, 0.5500, 0.5500, 0.5583, 0.5583, 0.5667, 0.5667, 0.5750, 0.5917, 0.6000, 0.6083, 0.6333, 0.6417} >.

Step 3: According to Equation (10), calculate the likelihood between *pi* and *pj* (*i*, *j* = 1, 2, 3, 4), then the likelihood matrix *L* = (*lij*)4×4 is constructed in Table 5.


**Table 5.** Likelihood matrix *L*.

Step 4: Calculate the dominance degree of each alternative with $\phi(vs\_i) = \frac{1}{4}\sum\_{j=1}^{4} l\_{ij}$ (*i* = 1, 2, 3, 4): *ϕ*(*vs*1) ≈ 0.5836, *ϕ*(*vs*2) ≈ 0.4841, *ϕ*(*vs*3) ≈ 0.5093, *ϕ*(*vs*4) ≈ 0.4231.

Step 5: Since *ϕ*(*vs*1) > *ϕ*(*vs*3) > *ϕ*(*vs*2) > *ϕ*(*vs*4), the ranking is *vs*1 ≻ *vs*3 ≻ *vs*2 ≻ *vs*4 and the optimal system is *vs*<sup>∗</sup> = *vs*1.

## *5.2. Comparative Analysis*

Since the HLPR presented in this paper is a new type of preference relation, no directly related research has been conducted so far. To verify the validity and advantages of the proposed method, several methods for HFLPRs [23–28] are used for comparison.

Note: The definitions of HLPRs and HFLPRs are not the same. The basic elements in HLPRs are HFLNs, whereas those in HFLPRs are HFLTSs. As a result, each HFLN in the HLPR should be transformed into the corresponding HFLTS by multiplying the linguistic term's subscript by each membership degree in turn. For example, < *lv*2, {0.3, 0.5} > can be converted into {*lv*0.6, *lv*1}. Then, the same illustrative example is applied in these methods, and detailed comparisons are provided in Table 6.
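The HFLN-to-HFLTS conversion just described can be sketched as follows; the function name is ours, and it simply scales the linguistic subscript by each membership degree:

```python
def hfln_to_hflts(subscript, memberships):
    """Convert an HFLN <lv_s, {m1, m2, ...}> into the HFLTS
    {lv_(s*m1), lv_(s*m2), ...} by multiplying the subscript s by
    each membership degree; returns the new subscripts."""
    return [subscript * m for m in memberships]

# <lv_2, {0.3, 0.5}>  ->  {lv_0.6, lv_1}
subs = hfln_to_hflts(2, [0.3, 0.5])
# subs == [0.6, 1.0]
```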


**Table 6.** Comparisons with different methods.

#### (1) Comparison with literature [24,25,27]

In literature [24], Wang and Xu provided a visual interpretation of additive consistency and weak consistency of extended HFLPRs based on graph theory. In literature [27], Li et al. defined an interval consistency index of HFLPRs based on a linear programming model. However, methods for improving consistency degrees and obtaining ranking orders are not mentioned in literature [24,27]; thus, the rankings are unavailable in these cases. In literature [25], Wu and Xu discussed some issues of HFLPRs concerning consistency and consensus, and defined a consistency index based on a distance measure. Nevertheless, dissimilar ranking results may occur with different preference adjustments when the feedback mechanism of literature [25] is adopted to improve the consistency level. Note: Feedback mechanisms are presented to improve the consistency level of preference relations in both literature [25] and this paper. Different from existing feedback approaches [25], with our method people can directly adjust their preferences according to the values of the elements in the likelihood matrix.

#### (2) Comparison with literature [23,26,28]

From Table 6, it is clear that the best alternative in the different methods is always *vs*1, which reveals the effectiveness of the proposed method. In literature [26], Gou et al. defined a consistency index on the basis of a compatibility measure and then obtained the ranking result based on a complementary matrix; however, an approach for improving the consistency level of HFLPRs was not given. Even though the rankings obtained in literature [28] and this paper are the same, there are still some differences between the two methods. First, in literature [23,28], a consistency index based on a distance measure was defined to check the consistency level of HFLPRs, while a likelihood-based index is suggested in this paper. Compared with a compatibility or distance measure, the largest advantage of the likelihood is that not only the deviation degree but also the order relationship of two elements can be directly indicated. Second, compared with automatic iterative algorithms [23,28], the feedback mechanism proposed in this paper reduces the loss of original information, and the DMs can understand their current status in each round. Besides, an approach of first using aggregation operators and then calculating score functions was adopted to obtain the ranking order in literature [23,28]. By contrast, a likelihood matrix is constructed in this paper to avoid secondary calculations and information distortion.

The advantages of the proposed approach are summarized as follows:


Overall, the proposed method provides a new and useful way to resolve complex fuzzy decision making problems in a hesitant linguistic environment, especially when experts or decision makers (DMs) can readily make comparisons between each pair of alternatives but can hardly provide direct evaluation information.
