**2. Preliminaries**

In this section, we review concepts and operations related to HFLTSs, HIFLTSs and PLTSs that will be used in the coming sections.

#### *2.1. Hesitant Fuzzy Linguistic Term Set*

The DMs may face such a problem where they hesitate with certain possible values. For this purpose, Rodriguez et al. [15] introduced the following concept of hesitant fuzzy linguistic term set (HFLTS).

**Definition 1** ([15])**.** *Let $S = \{s_\alpha;\ \alpha = 0, 1, 2, \ldots, g\}$ be a linguistic term set; then, an HFLTS $H_S$ is a finite and ordered subset of the consecutive linguistic terms of $S$.*

**Example 1.** *Let $S = \{s_0 = \text{extremely poor}, s_1 = \text{very poor}, s_2 = \text{poor}, s_3 = \text{medium}, s_4 = \text{good}, s_5 = \text{very good}, s_6 = \text{extremely good}\}$ be a linguistic term set. Then, two different HFLTSs may be defined as:*

$$H_S(x) = \{s_1 = \text{very poor}, s_2 = \text{poor}, s_3 = \text{medium}, s_4 = \text{good}\}$$

*and*

$$H_S(y) = \{s_3 = \text{medium}, s_4 = \text{good}, s_5 = \text{very good}, s_6 = \text{extremely good}\}.$$

**Definition 2** ([15])**.** *Let $S = \{s_\alpha;\ \alpha = 0, 1, 2, \ldots, g\}$ be an ordered finite set of linguistic terms and $E$ be an ordered finite subset of the consecutive linguistic terms of $S$. Then, the operators "max" and "min" on $E$ can be defined as follows:*

*(i)* $\max(E) = \max(s_l) = s_m$; $s_l \in E$ and $s_l \le s_m\ \forall l$;

*(ii)* $\min(E) = \min(s_l) = s_n$; $s_l \in E$ and $s_l \ge s_n\ \forall l$.

#### *2.2. Hesitant Intuitionistic Fuzzy Linguistic Term Set*

In 2014, Beg and Rashid [28] introduced the concept of hesitant intuitionistic fuzzy linguistic term set (HIFLTS). This concept is actually based on HFLTS and intuitionistic fuzzy set.

**Definition 3** ([28])**.** *Let $X$ be a universe of discourse and $S = \{s_\alpha;\ \alpha = 0, 1, 2, \ldots, g\}$ be a linguistic term set; then, an HIFLTS on $X$ is given by two functions $h$ and $h'$ that, when applied to an element of $X$, return finite and ordered subsets of consecutive linguistic terms of $S$. This can be presented mathematically as:*

$$A = \left\{ \left\langle x, h(x), h'(x) \right\rangle \mid x \in X \right\},$$

*where $h(x)$ and $h'(x)$ denote the possible membership and non-membership degrees, in terms of consecutive linguistic terms, of the element $x \in X$ to the set $A$, such that the following conditions are satisfied:*

*(i)* $\max(h(x)) + \min(h'(x)) \le s_g$; *(ii)* $\min(h(x)) + \max(h'(x)) \le s_g$.

#### *2.3. Probabilistic Linguistic Term Sets*

Recently, in 2016, Pang et al. [19] introduced the concept of PLTSs by attaching a probability to each linguistic term, which is basically a generalization of the HFLTS; thus, they opened a new dimension of research in decision theory.

**Definition 4** ([19])**.** *Let $S = \{s_\alpha;\ \alpha = 0, 1, 2, \ldots, g\}$ be a linguistic term set; then, a PLTS can be presented as follows:*

$$L(p) = \left\{ L^{(i)}\left(p^{(i)}\right) \;\middle|\; L^{(i)} \in S,\ p^{(i)} \ge 0,\ i = 1, 2, \ldots, \#L(p),\ \sum_{i=1}^{\#L(p)} p^{(i)} \le 1 \right\}, \tag{1}$$

*where $L^{(i)}\left(p^{(i)}\right)$ is the $i$th linguistic term $L^{(i)}$ associated with the probability $p^{(i)}$, and $\#L(p)$ denotes the number of linguistic terms in $L(p)$.*
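As a running illustration (an assumption of this edit, not notation from [19]), a PLTS can be modelled as a list of $(r, p)$ pairs, where $r$ is the lower index of the term $s_r$; the hypothetical helper below checks the constraints of Equation (1).

```python
# Hypothetical helper: validate a PLTS given as a list of (r, p) pairs,
# where r is the lower index of the linguistic term s_r (0 <= r <= g)
# and p is its probability, per the constraints in Equation (1).

def is_valid_plts(plts, g):
    return (all(0 <= r <= g and p >= 0 for r, p in plts)
            and sum(p for _, p in plts) <= 1 + 1e-12)
```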

**Definition 5** ([19])**.** *Let $L(p) = \left\{ L^{(i)}\left(p^{(i)}\right);\ i = 1, 2, \ldots, \#L(p) \right\}$ and let $r^{(i)}$ be the lower index of the linguistic term $L^{(i)}$. Then $L(p)$ is called an ordered PLTS if all the elements $L^{(i)}\left(p^{(i)}\right)$ in $L(p)$ are ranked according to the values of $r^{(i)} \times p^{(i)}$ in descending order.*

However, in a PLTS, two or more linguistic terms may have equal values of $r^{(i)} \times p^{(i)}$. Taking the PLTS $L(p) = \{s_1(0.4), s_2(0.2), s_3(0.4)\}$, here $r^{(1)} \times p^{(1)} = r^{(2)} \times p^{(2)} = 0.4$.

According to the above rule, these two elements cannot be arranged. To handle this type of problem, Zhang et al. [29] defined the following ranking rule.

**Definition 6** ([29])**.** *Let $L(p) = \left\{ L^{(i)}\left(p^{(i)}\right);\ i = 1, 2, \ldots, \#L(p) \right\}$ and let $r^{(i)}$ be the lower index of the linguistic term $L^{(i)}$. When two or more elements have equal values of $r^{(i)} \times p^{(i)}$:*

*(a) if the lower indices $r^{(i)}$ are unequal, arrange the elements $L^{(i)}\left(p^{(i)}\right)$ according to the values of $r^{(i)}$ in descending order;*

*(b) if the lower indices $r^{(i)}$ are equal, arrange the elements $L^{(i)}\left(p^{(i)}\right)$ according to the values of $p^{(i)}$ in descending order.*
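The ordering rules of Definitions 5 and 6 can be sketched as a single sort on the hypothetical $(r, p)$-pair representation: primary key $r^{(i)} \times p^{(i)}$, with ties broken by $r^{(i)}$ and then $p^{(i)}$, all descending.

```python
# Sketch of the ordering of Definitions 5 and 6 on a PLTS given as
# (r, p) pairs: sort by r*p descending; break ties by r, then by p.

def order_plts(plts):
    return sorted(plts, key=lambda t: (t[0] * t[1], t[0], t[1]), reverse=True)
```

For the tied PLTS above, $s_3(0.4)$ comes first, and the tie between $s_1(0.4)$ and $s_2(0.2)$ is resolved in favour of the larger lower index.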

**Definition 7** ([19])**.** *Let $L(p)$ be a PLTS such that $\sum_{i=1}^{\#L(p)} p^{(i)} < 1$; then, the associated PLTS is denoted and defined as*

$$L^{*}(p) = \left\{ L^{(i)}\left(p^{*(i)}\right);\ i = 1, 2, \ldots, \#L^{*}(p) \right\}, \tag{2}$$

*where $p^{*(i)} = p^{(i)} \Big/ \sum_{i=1}^{\#L(p)} p^{(i)}$, $\forall i = 1, 2, \ldots, \#L(p)$.*
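Definition 7 amounts to renormalising the probability vector; a minimal sketch under the same assumed $(r, p)$-pair representation:

```python
# Sketch of Definition 7: divide each probability by the total so the
# associated PLTS has probabilities summing to one.

def associate_plts(plts):
    total = sum(p for _, p in plts)
    return [(r, p / total) for r, p in plts]
```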

**Definition 8** ([19])**.** *Let $L_1(p) = \left\{ L_1^{(i)}\left(p_1^{(i)}\right);\ i = 1, 2, \ldots, \#L_1(p) \right\}$ and $L_2(p) = \left\{ L_2^{(i)}\left(p_2^{(i)}\right);\ i = 1, 2, \ldots, \#L_2(p) \right\}$ be two PLTSs, where $\#L_1(p)$ and $\#L_2(p)$ denote the number of linguistic terms in $L_1(p)$ and $L_2(p)$, respectively. If $\#L_1(p) > \#L_2(p)$, then $\#L_1(p) - \#L_2(p)$ linguistic terms will be added to $L_2(p)$ so that the numbers of elements in $L_1(p)$ and $L_2(p)$ become equal. The added linguistic terms are the smallest ones in $L_2(p)$, and their probabilities are zero.*

Let $L_1(p) = \left\{ L_1^{(i)}\left(p_1^{(i)}\right);\ i = 1, 2, \ldots, \#L_1(p) \right\}$ and $L_2(p) = \left\{ L_2^{(i)}\left(p_2^{(i)}\right);\ i = 1, 2, \ldots, \#L_2(p) \right\}$ be two PLTSs. If $\sum_{i=1}^{\#L_1(p)} p_1^{(i)} < 1$, $\sum_{i=1}^{\#L_2(p)} p_2^{(i)} < 1$ or $\#L_1(p) \neq \#L_2(p)$, then the normalized PLTSs, denoted by $\widetilde{L}_1(p) = \left\{ \widetilde{L}_1^{(i)}\left(\widetilde{p}_1^{(i)}\right);\ i = 1, 2, \ldots, \#L_1(p) \right\}$ and $\widetilde{L}_2(p) = \left\{ \widetilde{L}_2^{(i)}\left(\widetilde{p}_2^{(i)}\right);\ i = 1, 2, \ldots, \#L_2(p) \right\}$, can be obtained according to the following two steps: first, renormalize the probabilities of each PLTS according to Definition 7; second, equalize the numbers of linguistic terms according to Definition 8.
The deviation degree between PLTSs, which is analogous to the Euclidean distance between hesitant fuzzy sets [30], can be defined as:

**Definition 9** ([19])**.** *Let $L_1(p) = \left\{ L_1^{(i)}\left(p_1^{(i)}\right);\ i = 1, 2, \ldots, \#L_1(p) \right\}$ and $L_2(p) = \left\{ L_2^{(i)}\left(p_2^{(i)}\right);\ i = 1, 2, \ldots, \#L_2(p) \right\}$ be two PLTSs, where $\#L_1(p)$ and $\#L_2(p)$ denote the number of linguistic terms in $L_1(p)$ and $L_2(p)$, respectively, with $\#L_1(p) = \#L_2(p)$. Then, the deviation degree between these two PLTSs can be defined as*

$$d\left(L_1(p), L_2(p)\right) = \sqrt{\frac{1}{\#L_1(p)} \sum_{i=1}^{\#L_1(p)} \left( p_1^{(i)} r_1^{(i)} - p_2^{(i)} r_2^{(i)} \right)^2}, \tag{3}$$

*where $r_1^{(i)}$ and $r_2^{(i)}$ denote the lower indices of the linguistic terms $L_1^{(i)}$ and $L_2^{(i)}$, respectively.*

For further detail of PLTS, one can see Ref. [19].
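Equation (3) translates directly into code; a hedged sketch assuming two equal-length PLTSs in the $(r, p)$-pair representation:

```python
import math

# Sketch of Equation (3): deviation degree between two PLTSs of equal length.

def plts_deviation(plts1, plts2):
    assert len(plts1) == len(plts2), "PLTSs must be normalized to equal length"
    s = sum((p1 * r1 - p2 * r2) ** 2
            for (r1, p1), (r2, p2) in zip(plts1, plts2))
    return math.sqrt(s / len(plts1))
```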

#### **3. Probabilistic Hesitant Intuitionistic Linguistic Term Set**

Although an HIFLTS allows the DM to state his assessments by using several linguistic terms, it cannot reflect the probabilities of the DM's assessments.

To overcome this issue, in this section, the concept of probabilistic hesitant intuitionistic linguistic term set (PHILTS) which is based on the concept of HIFLTS and PLTS is proposed. Furthermore, some basic operations for PHILTS are also designed.

**Definition 10.** *Let $X$ be a universe of discourse and $S = \{s_\alpha;\ \alpha = 0, 1, 2, \ldots, g\}$ be a linguistic term set; then, a* PHILTS *on $X$ is given by two functions $l$ and $l'$ that, when applied to an element of $X$, return finite and ordered subsets of the consecutive linguistic terms of $S$ along with their occurrence probabilities, which can be mathematically expressed as*

$$A(p) = \left\{ \left\langle x, l(x)(p(x)) = \left\{ l^{(i)}(x)\left(p^{(i)}(x)\right) \right\}, l'(x)\left(p'(x)\right) = \left\{ l'^{(j)}(x)\left(p'^{(j)}(x)\right) \right\} \right\rangle \;\middle|\; \begin{array}{l} p^{(i)}(x) \ge 0,\ i = 1, 2, \ldots, \#l(x)(p(x)),\ \sum_{i=1}^{\#l(x)(p(x))} p^{(i)}(x) \le 1,\ \text{and} \\ p'^{(j)}(x) \ge 0,\ j = 1, 2, \ldots, \#l'(x)(p'(x)),\ \sum_{j=1}^{\#l'(x)(p'(x))} p'^{(j)}(x) \le 1 \end{array} \right\}, \tag{4}$$

*where $l(x)(p(x))$ and $l'(x)(p'(x))$ are the PLTSs denoting the membership and non-membership degrees of the element $x \in X$ to the set $A(p)$, such that the following two conditions are satisfied:*

*(i)* $\max(l(x)) + \min(l'(x)) \le s_g$; *(ii)* $\min(l(x)) + \max(l'(x)) \le s_g$.

For the sake of simplicity and convenience, we call the pair $A(x)(p(x)) = \left\langle l(x)(p(x)), l'(x)(p'(x)) \right\rangle$ the probabilistic hesitant intuitionistic linguistic term element (PHILTE), denoted by $A(p) = \left\langle l(p), l'(p') \right\rangle$ for short.

**Remark 1.** *Particularly, if the probabilities of all linguistic terms in membership part and non-membership part become equal, then PHILTE reduces to HIFLTE.*

**Example 2.** *Let $S = \{s_0 = \text{extremely poor}, s_1 = \text{very poor}, s_2 = \text{poor}, s_3 = \text{medium}, s_4 = \text{good}, s_5 = \text{very good}, s_6 = \text{extremely good}\}$ be a linguistic term set. A PHILTS is*

$$A(p) = \left\{ \left\langle x_1, \{s_1(0.4), s_2(0.1), s_3(0.35)\}, \{s_3(0.3), s_4(0.4)\} \right\rangle, \left\langle x_2, \{s_4(0.33), s_5(0.5)\}, \{s_1(0.2), s_2(0.45)\} \right\rangle \right\}.$$

One can easily check the conditions of PHILTS for *A* (*p*).
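The check mentioned above can be sketched as follows, reading $s_a + s_b \le s_g$ as $a + b \le g$ on the index scale (an assumption consistent with the conditions of Definition 10):

```python
# Sketch of conditions (i) and (ii) of Definition 10 for one element x:
# membership and nonmembership are lists of (r, p) pairs.

def is_valid_philte(membership, nonmembership, g):
    ms = [r for r, _ in membership]
    ns = [r for r, _ in nonmembership]
    return max(ms) + min(ns) <= g and min(ms) + max(ns) <= g
```

For $x_1$ in Example 2, $\max$ index $3$ plus $\min$ index $3$ gives $6 \le 6$, so the conditions hold.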

To illustrate the PHILTS more straightforwardly, in the following, a real-life example is given to depict the difference between the PHILTS and the HIFLTS:

**Example 3.** *Take the evaluation of a vehicle on the comfortable degree attribute/criterion as an example. Let $S$ be the linguistic term set used in the above example. An expert provides an HIFLTE $\left\langle \{s_1, s_2, s_3\}, \{s_3, s_4\} \right\rangle$ on the comfortable degree due to his/her hesitation in this evaluation. However, he/she is more confident in the linguistic term $s_2$ for the membership degree set and the linguistic term $s_4$ for the non-membership degree set. The HIFLTS fails to express this confidence. Therefore, we utilize the PHILTS to present his/her evaluations. In this case, the evaluations can be expressed as $A(p) = \left\langle \{s_1(0.2), s_2(0.6), s_3(0.2)\}, \{s_3(0.2), s_4(0.8)\} \right\rangle$.*

In the following, the ordered PHILTE is defined to make sure that the operational results among PHILTEs can be determined easily.

**Definition 11.** *A PHILTE $A(p) = \left\langle l(p), l'(p') \right\rangle$ is said to be an ordered PHILTE if $l(p)$ and $l'(p')$ are ordered PLTSs.*

**Example 4.** *Consider the PHILTE $A(p) = \left\langle \{s_1(0.4), s_2(0.1), s_3(0.35)\}, \{s_3(0.3), s_4(0.4)\} \right\rangle$ used in Example 2. Then, according to Definition 11, the ordered PHILTE is $A(p) = \left\langle \{s_3(0.35), s_1(0.4), s_2(0.1)\}, \{s_4(0.4), s_3(0.3)\} \right\rangle$.*

#### *3.1. The Normalization of PHILTEs*

Ideally, the sum of the probabilities is one; however, in a PHILTE, if either the membership probabilities or the non-membership probabilities sum to less than one, this issue is resolved as follows.

**Definition 12.** *Consider a PHILTE $A(p) = \left\langle l(p), l'(p') \right\rangle$; the associated PHILTE $A^{*}(p) = \left\langle l(p^{*}), l'(p'^{*}) \right\rangle$ is defined, where*

$$l(p^{*}) = \left\{ l^{(i)}\left(p^{*(i)}\right) \mid i = 1, 2, \ldots, \#l(p) \right\};\quad p^{*(i)} = \frac{p^{(i)}}{\sum_{i=1}^{\#l(p)} p^{(i)}},\ \forall i = 1, 2, \ldots, \#l(p), \tag{5}$$

*and*

$$l'(p'^{*}) = \left\{ l'^{(j)}\left(p'^{*(j)}\right) \mid j = 1, 2, \ldots, \#l'(p') \right\};\quad p'^{*(j)} = \frac{p'^{(j)}}{\sum_{j=1}^{\#l'(p')} p'^{(j)}},\ \forall j = 1, 2, \ldots, \#l'(p'). \tag{6}$$

**Example 5.** *Consider the PHILTE $A(p) = \left\langle \{s_1(0.4), s_2(0.1), s_3(0.35)\}, \{s_3(0.3), s_4(0.4)\} \right\rangle$. Here, we see that $\sum_{i=1}^{\#l(p)} p^{(i)} = 0.85 < 1$ and $\sum_{j=1}^{\#l'(p')} p'^{(j)} = 0.7 < 1$, so the associated PHILTE is*

$$A^{*}(p) = \left\langle l(p^{*}), l'(p'^{*}) \right\rangle = \left\langle \left\{ s_1\left(\tfrac{0.4}{0.85}\right), s_2\left(\tfrac{0.1}{0.85}\right), s_3\left(\tfrac{0.35}{0.85}\right) \right\}, \left\{ s_3\left(\tfrac{0.3}{0.7}\right), s_4\left(\tfrac{0.4}{0.7}\right) \right\} \right\rangle.$$

In the decision making process, experts usually face problems in which the lengths of PHILTEs differ. Let $A(p) = \left\langle l(p), l'(p') \right\rangle$ and $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle$ be two PHILTEs of different lengths. Then, the following three cases are possible: $(I)$ $\#l(p) \neq \#l_1(p_1)$; $(II)$ $\#l'(p') \neq \#l'_1(p'_1)$; $(III)$ $\#l(p) \neq \#l_1(p_1)$ and $\#l'(p') \neq \#l'_1(p'_1)$. In such situations, the lengths need to be equalized by increasing the number of probabilistic linguistic terms in the PLTS whose number of terms is relatively small, because PHILTEs of different lengths create great problems in operations, in aggregation operators and in finding the deviation degree between two PHILTEs.

**Definition 13.** *Given any two PHILTEs $A(p) = \left\langle l(p), l'(p') \right\rangle$ and $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle$, if $\#l(p) > \#l_1(p_1)$, then $\#l(p) - \#l_1(p_1)$ linguistic terms should be added to $l_1(p_1)$ to make their cardinalities identical. The added linguistic terms are the smallest one(s) in $l_1(p_1)$, and their probabilities are zero.*

The remaining cases are analogous to Case (*I*).

Let $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle$ and $A_2(p_2) = \left\langle l_2(p_2), l'_2(p'_2) \right\rangle$ be two PHILTEs. Then, the following two simple steps are involved in the normalization process.

Step 1: If $\sum_{i=1}^{\#l_j(p_j)} p_j^{(i)} < 1$ or $\sum_{i=1}^{\#l'_j(p'_j)} p'^{(i)}_j < 1$, $j = 1, 2$, then we calculate $l_j\left(p_j^{*}\right)$ and $l'_j\left(p'^{*}_j\right)$, $j = 1, 2$, using Equations (5) and (6).

Step 2: If $\#l_1(p_1) \neq \#l_2(p_2)$ or $\#l'_1(p'_1) \neq \#l'_2(p'_2)$, then we add some elements, according to Definition 13, to the PLTS with the smaller number of elements.

The resultant PHILTEs are called the normalized PHILTEs, denoted as $\widetilde{A}_1(p_1)$ and $\widetilde{A}_2(p_2)$.

Note that, for the convenience of presentation, we also denote the normalized PHILTEs simply by $A_1(p_1)$ and $A_2(p_2)$.

**Example 6.** *Let $A(p) = \left\langle \{s_2(0.3), s_3(0.7)\}, \{s_0(0.2), s_1(0.4), s_2(0.3)\} \right\rangle$ and $A_1(p_1) = \left\langle \{s_3(0.4), s_4(0.3), s_5(0.3)\}, \{s_1(0.4), s_2(0.6)\} \right\rangle$; then:*

*Step 1: According to Equation* (6)*, $l'\left(p'^{*}\right) = \left\{ s_0\left(\tfrac{0.2}{0.9}\right), s_1\left(\tfrac{0.4}{0.9}\right), s_2\left(\tfrac{0.3}{0.9}\right) \right\}$.*

*Step 2: Since $\#l(p) < \#l_1(p_1)$, we add the linguistic term $s_2$ to $l(p)$ so that the numbers of linguistic terms in $l(p)$ and $l_1(p_1)$ become equal; thus, $l(p) = \{s_2(0.3), s_3(0.7), s_2(0)\}$. In addition, $\#l'_1(p'_1) < \#l'(p')$, so we add the linguistic term $s_1$ to $l'_1(p'_1)$: $l'_1(p'_1) = \{s_1(0.4), s_2(0.6), s_1(0)\}$. Therefore, after normalization, we have*

$$A(p) = \left\langle \{s_2(0.3), s_3(0.7), s_2(0)\}, \left\{ s_0\left(\tfrac{0.2}{0.9}\right), s_1\left(\tfrac{0.4}{0.9}\right), s_2\left(\tfrac{0.3}{0.9}\right) \right\} \right\rangle \text{ and } A_1(p_1) = \left\langle \{s_3(0.4), s_4(0.3), s_5(0.3)\}, \{s_1(0.4), s_2(0.6), s_1(0)\} \right\rangle.$$
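The two normalization steps can be sketched one part (membership or non-membership) at a time; the $(r, p)$-pair representation and function names are assumptions of this sketch:

```python
# Step 1 (Definition 12): renormalise the probabilities of one part.
def normalise_part(part):
    total = sum(p for _, p in part)
    return [(r, p / total) for r, p in part]

# Step 2 (Definition 13): pad the shorter part with its smallest
# linguistic term, attached with probability zero.
def pad_part(part, length):
    smallest = min(r for r, _ in part)
    return part + [(smallest, 0.0)] * (length - len(part))
```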

#### *3.2. The Comparison between PHILTEs*

In this section, the comparison between two PHILTEs is presented. For this purpose, the score function and the deviation degree of the PHILTE are defined.

**Definition 14.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle = \left\langle l^{(i)}\left(p^{(i)}\right), l'^{(j)}\left(p'^{(j)}\right) \right\rangle$; $i = 1, 2, \ldots, \#l(p)$, $j = 1, 2, \ldots, \#l'(p')$, be a PHILTE with a linguistic term set $S = \{s_\alpha;\ \alpha = 0, 1, 2, \ldots, g\}$ such that $r^{(i)}$ and $r'^{(j)}$ denote, respectively, the lower indices of the linguistic terms $l^{(i)}$ and $l'^{(j)}$; then, the score of $A(p)$ is denoted and defined as follows:*

$$E\left(A(p)\right) = s_{\overline{\gamma}}, \tag{7}$$

*where $\overline{\gamma} = \dfrac{g + \overline{\alpha} - \overline{\beta}}{2}$; $\overline{\alpha} = \dfrac{\sum_{i=1}^{\#l(p)} r^{(i)} p^{(i)}}{\sum_{i=1}^{\#l(p)} p^{(i)}}$ and $\overline{\beta} = \dfrac{\sum_{j=1}^{\#l'(p')} r'^{(j)} p'^{(j)}}{\sum_{j=1}^{\#l'(p')} p'^{(j)}}$.*

*It is easy to see that $0 \le \frac{g + \overline{\alpha} - \overline{\beta}}{2} \le g$, which means $s_{\overline{\gamma}} \in \overline{S} = \{s_\alpha \mid \alpha \in [0, g]\}$.*

Apparently, the score function represents the average linguistic term of a PHILTE.

For two PHILTEs $A(p)$ and $A_1(p_1)$: if $E(A(p)) > E(A_1(p_1))$, then $A(p)$ is superior to $A_1(p_1)$, denoted as $A(p) > A_1(p_1)$; if $E(A(p)) < E(A_1(p_1))$, then $A(p)$ is inferior to $A_1(p_1)$, denoted as $A(p) < A_1(p_1)$; and, if $E(A(p)) = E(A_1(p_1))$, then we cannot distinguish between them. Thus, in this case, we define another indicator, named the deviation degree, as follows:

**Definition 15.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle = \left\langle l^{(i)}\left(p^{(i)}\right), l'^{(j)}\left(p'^{(j)}\right) \right\rangle$; $i = 1, 2, \ldots, \#l(p)$, $j = 1, 2, \ldots, \#l'(p')$, be a PHILTE such that $r^{(i)}$ and $r'^{(j)}$ denote, respectively, the lower indices of the linguistic terms $l^{(i)}$ and $l'^{(j)}$; then, the deviation degree of $A(p)$ is denoted and defined as follows:*

$$\sigma\left(A(p)\right) = \left( \frac{\sum_{i=1}^{\#l(p)} \left( p^{(i)} \left( r^{(i)} - \overline{\alpha} \right) \right)^2}{\sum_{i=1}^{\#l(p)} p^{(i)}} + \frac{\sum_{j=1}^{\#l'(p')} \left( p'^{(j)} \left( r'^{(j)} - \overline{\beta} \right) \right)^2}{\sum_{j=1}^{\#l'(p')} p'^{(j)}} \right)^{\frac{1}{2}}. \tag{8}$$

The deviation degree shows the distance from the average value in the PHILTE. A greater value of $\sigma$ implies lower consistency, while a lesser value of $\sigma$ indicates higher consistency.

Thus, $A(p)$ and $A_1(p_1)$ can be ranked by the following procedure:

- (*a*) If $\sigma(A(p)) > \sigma(A_1(p_1))$, then $A(p) < A_1(p_1)$;
- (*b*) If $\sigma(A(p)) < \sigma(A_1(p_1))$, then $A(p) > A_1(p_1)$;
- (*c*) If $\sigma(A(p)) = \sigma(A_1(p_1))$, then $A(p)$ is indifferent to $A_1(p_1)$, denoted as $A(p) \sim A_1(p_1)$.

**Example 7.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle = \left\langle \{s_1(0.12), s_2(0.26), s_3(0.62)\}, \{s_2(0.1), s_3(0.3), s_4(0.6)\} \right\rangle$, $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle = \left\langle \{s_2(0.3), s_3(0.3)\}, \{s_3(0.35), s_4(0.35)\} \right\rangle$ and $S$ be the linguistic term set used in Example 2; then*

$$\begin{aligned} \overline{\alpha} &= \frac{1 \times 0.12 + 2 \times 0.26 + 3 \times 0.62}{0.12 + 0.26 + 0.62} = 2.5, \quad \overline{\beta} = \frac{2 \times 0.1 + 3 \times 0.3 + 4 \times 0.6}{0.1 + 0.3 + 0.6} = 3.5, \\ \overline{\gamma} &= \frac{6 + 2.5 - 3.5}{2} = 2.5, \quad E\left(A(p)\right) = s_{2.5}, \\ \overline{\alpha}_1 &= \frac{2 \times 0.3 + 3 \times 0.3}{0.3 + 0.3} = 2.5, \quad \overline{\beta}_1 = \frac{3 \times 0.35 + 4 \times 0.35}{0.35 + 0.35} = 3.5, \\ \overline{\gamma}_1 &= \frac{6 + 2.5 - 3.5}{2} = 2.5, \quad E\left(A_1(p_1)\right) = s_{2.5}. \end{aligned}$$

*Since $E(A(p)) = E(A_1(p_1))$, we have to calculate the deviation degrees of $A(p)$ and $A_1(p_1)$.*

$$\sigma\left(A(p)\right) = \sqrt{\frac{(0.12(1-2.5))^2 + (0.26(2-2.5))^2 + (0.62(3-2.5))^2}{0.12 + 0.26 + 0.62} + \frac{(0.1(2-3.5))^2 + (0.3(3-3.5))^2 + (0.6(4-3.5))^2}{0.1 + 0.3 + 0.6}} = 0.529,$$

$$\sigma\left(A_1(p_1)\right) = \sqrt{\frac{(0.3(2-2.5))^2 + (0.3(3-2.5))^2}{0.3 + 0.3} + \frac{(0.35(3-3.5))^2 + (0.35(4-3.5))^2}{0.35 + 0.35}} = 0.403.
$$
Thus, $\sigma(A(p)) > \sigma(A_1(p_1))$, so $A(p)$ is inferior to $A_1(p_1)$.
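The score and deviation computations of Example 7 can be reproduced with a short sketch (the $(r, p)$-pair representation and helper names are assumptions of this edit):

```python
import math

# Sketch of Definitions 14 and 15. alpha-bar and beta-bar are the
# probability-weighted mean lower indices of the two parts.

def weighted_mean(part):
    return sum(r * p for r, p in part) / sum(p for _, p in part)

def score_index(membership, nonmembership, g):
    # Definition 14: gamma-bar = (g + alpha-bar - beta-bar) / 2
    return (g + weighted_mean(membership) - weighted_mean(nonmembership)) / 2

def philte_deviation(membership, nonmembership):
    # Definition 15: pooled weighted squared deviations of each part
    def part_dev(part, mean):
        return sum((p * (r - mean)) ** 2 for r, p in part) / sum(p for _, p in part)
    return math.sqrt(part_dev(membership, weighted_mean(membership))
                     + part_dev(nonmembership, weighted_mean(nonmembership)))
```

For Example 7, this gives $E = s_{2.5}$ for both PHILTEs and $\sigma(A(p)) \approx 0.529$.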

In the following, we present a theorem which shows that association does not affect the score and deviation degree of a PHILTE.

**Theorem 1.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle$ be a PHILTE and $A^{*}(p) = \left\langle l(p^{*}), l'(p'^{*}) \right\rangle$ be the associated PHILTE; then $E\left(A^{*}(p)\right) = E\left(A(p)\right)$ and $\sigma\left(A^{*}(p)\right) = \sigma\left(A(p)\right)$.*

**Proof.** $E\left(A^{*}(p)\right) = s_{\overline{\gamma}^{*}}$, where $\overline{\gamma}^{*} = \frac{g + \overline{\alpha}^{*} - \overline{\beta}^{*}}{2}$ and $\overline{\alpha}^{*} = \frac{\sum_{i=1}^{\#l(p^{*})} r^{(i)} p^{*(i)}}{\sum_{i=1}^{\#l(p^{*})} p^{*(i)}}$. Since $\sum_{i=1}^{\#l(p^{*})} p^{*(i)} = 1$ and $p^{*(i)} = p^{(i)} \big/ \sum_{i=1}^{\#l(p)} p^{(i)}$, it follows that $\overline{\alpha}^{*} = \frac{\sum_{i=1}^{\#l(p)} r^{(i)} p^{(i)}}{\sum_{i=1}^{\#l(p)} p^{(i)}} = \overline{\alpha}$. Similarly, $\overline{\beta}^{*} = \frac{\sum_{j=1}^{\#l'(p'^{*})} r'^{(j)} p'^{*(j)}}{\sum_{j=1}^{\#l'(p'^{*})} p'^{*(j)}}$; since $\sum_{j=1}^{\#l'(p'^{*})} p'^{*(j)} = 1$ and $p'^{*(j)} = p'^{(j)} \big/ \sum_{j=1}^{\#l'(p')} p'^{(j)}$, we obtain $\overline{\beta}^{*} = \overline{\beta}$. Hence, $E\left(A^{*}(p)\right) = E\left(A(p)\right)$.

Next,

$$\sigma\left(A^{*}(p)\right) = \left( \frac{\sum_{i=1}^{\#l(p^{*})} \left( p^{*(i)} \left( r^{(i)} - \overline{\alpha}^{*} \right) \right)^2}{\sum_{i=1}^{\#l(p^{*})} p^{*(i)}} + \frac{\sum_{j=1}^{\#l'(p'^{*})} \left( p'^{*(j)} \left( r'^{(j)} - \overline{\beta}^{*} \right) \right)^2}{\sum_{j=1}^{\#l'(p'^{*})} p'^{*(j)}} \right)^{\frac{1}{2}}.$$

Since $\sum_{i=1}^{\#l(p^{*})} p^{*(i)} = 1$, $p^{*(i)} = p^{(i)} \big/ \sum_{i=1}^{\#l(p)} p^{(i)}$, $\sum_{j=1}^{\#l'(p'^{*})} p'^{*(j)} = 1$, $p'^{*(j)} = p'^{(j)} \big/ \sum_{j=1}^{\#l'(p')} p'^{(j)}$, $\overline{\alpha}^{*} = \overline{\alpha}$ and $\overline{\beta}^{*} = \overline{\beta}$, it yields that $\sigma\left(A^{*}(p)\right) = \sigma\left(A(p)\right)$.

The following theorem shows that order of comparison between two PHILTEs remains unaltered after normalization.

**Theorem 2.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle$ and $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle$ be any two PHILTEs, and let $\widetilde{A}(p) = \left\langle \widetilde{l}(p), \widetilde{l}'(p') \right\rangle$ and $\widetilde{A}_1(p_1) = \left\langle \widetilde{l}_1(p_1), \widetilde{l}'_1(p'_1) \right\rangle$ be the corresponding normalized PHILTEs, respectively; then $A(p) < A_1(p_1) \Longleftrightarrow \widetilde{A}(p) < \widetilde{A}_1(p_1)$.*

**Proof.** The proof is quite clear: according to Theorem 1, $E(A(p)) = E\left(A^{*}(p)\right)$ and $\sigma(A(p)) = \sigma\left(A^{*}(p)\right)$, so the order of comparison is preserved by Step (1) of the normalization process. As far as Step (2) is concerned, in that step we add some elements to the PHILTEs, but this does not change the order, as we attach zero probabilities to the added elements; this means $E\left(\widetilde{A}(p)\right) = E\left(A(p)\right)$ and $\sigma\left(\widetilde{A}(p)\right) = \sigma\left(A(p)\right)$, and likewise for $A_1(p_1)$. Hence, the result holds true.

In the following definition, we summarize the fact that the comparison of any two PHILTEs can be done through their corresponding normalized PHILTEs.

**Definition 16.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle$ and $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle$ be any two PHILTEs, and let $\widetilde{A}(p) = \left\langle \widetilde{l}(p), \widetilde{l}'(p') \right\rangle$ and $\widetilde{A}_1(p_1) = \left\langle \widetilde{l}_1(p_1), \widetilde{l}'_1(p'_1) \right\rangle$ be the corresponding normalized PHILTEs, respectively; then:*

$(I)$ If $E\left(\widetilde{A}(p)\right) > E\left(\widetilde{A}_1(p_1)\right)$, then $A(p) > A_1(p_1)$;

$(II)$ If $E\left(\widetilde{A}(p)\right) < E\left(\widetilde{A}_1(p_1)\right)$, then $A(p) < A_1(p_1)$;

$(III)$ If $E\left(\widetilde{A}(p)\right) = E\left(\widetilde{A}_1(p_1)\right)$, then $A(p)$ and $A_1(p_1)$ are ranked by their deviation degrees, as in the procedure above.

**Example 8.** *Let $S$ be the linguistic term set used in Example 2, $A(p) = \left\langle l(p), l'(p') \right\rangle = \left\langle \{s_1(0.12), s_2(0.26), s_3(0.62)\}, \{s_3(0.3), s_4(0.5)\} \right\rangle$ and $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle = \left\langle \{s_2(0.3), s_3(0.3)\}, \{s_3(0.35), s_4(0.35)\} \right\rangle$; then the corresponding normalized PHILTEs are $\widetilde{A}(p) = \left\langle \widetilde{l}(p), \widetilde{l}'(p') \right\rangle = \left\langle \{s_1(0.12), s_2(0.26), s_3(0.62)\}, \{s_3(0.375), s_4(0.625), s_3(0)\} \right\rangle$ and $\widetilde{A}_1(p_1) = \left\langle \widetilde{l}_1(p_1), \widetilde{l}'_1(p'_1) \right\rangle = \left\langle \{s_2(0.5), s_3(0.5), s_2(0)\}, \{s_3(0.5), s_4(0.5), s_3(0)\} \right\rangle$.*

*We calculate the scores of these normalized PHILTEs:*

$$\begin{aligned} \overline{\alpha} &= \frac{1 \times 0.12 + 2 \times 0.26 + 3 \times 0.62}{0.12 + 0.26 + 0.62} = 2.5, \quad \overline{\beta} = \frac{3 \times 0.375 + 4 \times 0.625 + 3 \times 0}{0.375 + 0.625 + 0} = 3.625, \\ \overline{\gamma} &= \frac{6 + 2.5 - 3.625}{2} = 2.437, \quad E\left(\widetilde{A}(p)\right) = s_{2.437}, \\ \overline{\alpha}_1 &= \frac{2 \times 0.5 + 3 \times 0.5 + 2 \times 0}{0.5 + 0.5 + 0} = 2.5, \quad \overline{\beta}_1 = \frac{3 \times 0.5 + 4 \times 0.5 + 3 \times 0}{0.5 + 0.5 + 0} = 3.5, \\ \overline{\gamma}_1 &= \frac{6 + 2.5 - 3.5}{2} = 2.5, \quad E\left(\widetilde{A}_1(p_1)\right) = s_{2.5}. \end{aligned}$$

*Since $E\left(\widetilde{A}(p)\right) < E\left(\widetilde{A}_1(p_1)\right)$, we have $A(p) < A_1(p_1)$.*

#### *3.3. Basic Operations of PHILTEs*

Based on the operational laws of the PLTSs [19], we develop a basic operational framework for PHILTEs and investigate its properties in preparation for applications to practical real life problems. Hereafter, it is assumed that all PHILTEs are normalized.

**Definition 17.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle = \left\langle l^{(i)}\left(p^{(i)}\right), l'^{(j)}\left(p'^{(j)}\right) \right\rangle$; $i = 1, 2, \ldots, \#l(p)$, $j = 1, 2, \ldots, \#l'(p')$, and $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle = \left\langle l_1^{(i)}\left(p_1^{(i)}\right), l'^{(j)}_1\left(p'^{(j)}_1\right) \right\rangle$; $i = 1, 2, \ldots, \#l_1(p_1)$, $j = 1, 2, \ldots, \#l'_1(p'_1)$, be two normalized and ordered PHILTEs; then:*

*Addition:*

$$A(p) \oplus A_1(p_1) = \left\langle l(p) \oplus l_1(p_1), l'(p') \oplus l'_1(p'_1) \right\rangle = \left\langle \bigcup_{l^{(i)} \in l(p),\, l_1^{(i)} \in l_1(p_1)} \left\{ p^{(i)} l^{(i)} \oplus p_1^{(i)} l_1^{(i)} \right\}, \bigcup_{l'^{(j)} \in l'(p'),\, l'^{(j)}_1 \in l'_1(p'_1)} \left\{ p'^{(j)} l'^{(j)} \oplus p'^{(j)}_1 l'^{(j)}_1 \right\} \right\rangle \tag{9}$$

*Multiplication:*

$$A(p) \otimes A_1(p_1) = \left\langle l(p) \otimes l_1(p_1), l'(p') \otimes l'_1(p'_1) \right\rangle = \left\langle \bigcup_{l^{(i)} \in l(p),\, l_1^{(i)} \in l_1(p_1)} \left\{ \left( l^{(i)} \right)^{p^{(i)}} \otimes \left( l_1^{(i)} \right)^{p_1^{(i)}} \right\}, \bigcup_{l'^{(j)} \in l'(p'),\, l'^{(j)}_1 \in l'_1(p'_1)} \left\{ \left( l'^{(j)} \right)^{p'^{(j)}} \otimes \left( l'^{(j)}_1 \right)^{p'^{(j)}_1} \right\} \right\rangle \tag{10}$$

*Scalar multiplication:*

$$\gamma\left(A(p)\right) = \left\langle \gamma\, l(p), \gamma\, l'(p') \right\rangle = \left\langle \bigcup_{l^{(i)} \in l(p)} \left\{ \gamma\, p^{(i)} l^{(i)} \right\}, \bigcup_{l'^{(j)} \in l'(p')} \left\{ \gamma\, p'^{(j)} l'^{(j)} \right\} \right\rangle \tag{11}$$

*Scalar power:*

$$\left(A(p)\right)^{\gamma} = \left\langle \left(l(p)\right)^{\gamma}, \left(l'(p')\right)^{\gamma} \right\rangle = \left\langle \bigcup_{l^{(i)} \in l(p)} \left\{ \left( l^{(i)} \right)^{\gamma p^{(i)}} \right\}, \bigcup_{l'^{(j)} \in l'(p')} \left\{ \left( l'^{(j)} \right)^{\gamma p'^{(j)}} \right\} \right\rangle \tag{12}$$

*where $l^{(i)}$ and $l_1^{(i)}$ are the $i$th linguistic terms in $l(p)$ and $l_1(p_1)$, respectively; $l'^{(j)}$ and $l'^{(j)}_1$ are the $j$th linguistic terms in $l'(p')$ and $l'_1(p'_1)$, respectively; $p^{(i)}$ and $p_1^{(i)}$ are the probabilities of the $i$th linguistic terms in $l(p)$ and $l_1(p_1)$, respectively; $p'^{(j)}$ and $p'^{(j)}_1$ are the probabilities of the $j$th linguistic terms in $l'(p')$ and $l'_1(p'_1)$, respectively; and $\gamma$ denotes a nonnegative scalar.*
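The operations above inherit the PLTS operational laws of [19]. Assuming the usual index-scale reading $p\, s_r = s_{pr}$ and $s_a \oplus s_b = s_{a+b}$ (an assumption of this sketch, not stated explicitly above), Equations (9) and (11) act on one part of a PHILTE as follows; the pairing of $i$th elements presumes equal-length normalized parts.

```python
# Sketch of Eq. (9) (addition) and Eq. (11) (scalar multiplication) on one
# part of a PHILTE, given as (r, p) pairs; results are returned as the lower
# indices of the combined terms.

def part_add(part1, part2):
    # the i-th element of part1 combines with the i-th element of part2
    return [p1 * r1 + p2 * r2 for (r1, p1), (r2, p2) in zip(part1, part2)]

def part_scalar(gamma, part):
    return [gamma * p * r for r, p in part]
```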

**Theorem 3.** *Let $A(p) = \left\langle l(p), l'(p') \right\rangle$, $A_1(p_1) = \left\langle l_1(p_1), l'_1(p'_1) \right\rangle$ and $A_2(p_2) = \left\langle l_2(p_2), l'_2(p'_2) \right\rangle$ be any three ordered and normalized PHILTEs and $\gamma, \gamma_1, \gamma_2 \ge 0$; then:*

*(1)* $A(p) \oplus A_1(p_1) = A_1(p_1) \oplus A(p)$;
*(2)* $A(p) \oplus (A_1(p_1) \oplus A_2(p_2)) = (A(p) \oplus A_1(p_1)) \oplus A_2(p_2)$;
*(3)* $\gamma\,(A(p) \oplus A_1(p_1)) = \gamma\,A(p) \oplus \gamma\,A_1(p_1)$;
*(4)* $(\gamma_1 + \gamma_2)\,A(p) = \gamma_1 A(p) \oplus \gamma_2 A(p)$;
*(5)* $A(p) \otimes A_1(p_1) = A_1(p_1) \otimes A(p)$;
*(6)* $A(p) \otimes (A_1(p_1) \otimes A_2(p_2)) = (A(p) \otimes A_1(p_1)) \otimes A_2(p_2)$;
*(7)* $(A(p) \otimes A_1(p_1))^{\gamma} = (A(p))^{\gamma} \otimes (A_1(p_1))^{\gamma}$;
*(8)* $(A(p))^{\gamma_1 + \gamma_2} = (A(p))^{\gamma_1} \otimes (A(p))^{\gamma_2}$.

**Proof.** The membership and non-membership parts are treated identically in each case; below, the unions run over $l^{(i)} \in l(p)$, $l_1^{(i)} \in l_1(p_1)$, $l_2^{(i)} \in l_2(p_2)$ and $l'^{(j)} \in l'(p')$, $l'^{(j)}_1 \in l'_1(p'_1)$, $l'^{(j)}_2 \in l'_2(p'_2)$.

(1) $A(p) \oplus A_1(p_1) = \left\langle l(p) \oplus l_1(p_1), l'(p') \oplus l'_1(p'_1) \right\rangle = \left\langle \bigcup \left\{ p^{(i)} l^{(i)} \oplus p_1^{(i)} l_1^{(i)} \right\}, \bigcup \left\{ p'^{(j)} l'^{(j)} \oplus p'^{(j)}_1 l'^{(j)}_1 \right\} \right\rangle = \left\langle \bigcup \left\{ p_1^{(i)} l_1^{(i)} \oplus p^{(i)} l^{(i)} \right\}, \bigcup \left\{ p'^{(j)}_1 l'^{(j)}_1 \oplus p'^{(j)} l'^{(j)} \right\} \right\rangle = \left\langle l_1(p_1) \oplus l(p), l'_1(p'_1) \oplus l'(p') \right\rangle = A_1(p_1) \oplus A(p)$.

(2) $A(p) \oplus (A_1(p_1) \oplus A_2(p_2)) = \left\langle l(p) \oplus (l_1(p_1) \oplus l_2(p_2)), l'(p') \oplus (l'_1(p'_1) \oplus l'_2(p'_2)) \right\rangle = \left\langle \bigcup \left\{ p^{(i)} l^{(i)} \oplus \left( p_1^{(i)} l_1^{(i)} \oplus p_2^{(i)} l_2^{(i)} \right) \right\}, \bigcup \left\{ p'^{(j)} l'^{(j)} \oplus \left( p'^{(j)}_1 l'^{(j)}_1 \oplus p'^{(j)}_2 l'^{(j)}_2 \right) \right\} \right\rangle = \left\langle \bigcup \left\{ \left( p^{(i)} l^{(i)} \oplus p_1^{(i)} l_1^{(i)} \right) \oplus p_2^{(i)} l_2^{(i)} \right\}, \bigcup \left\{ \left( p'^{(j)} l'^{(j)} \oplus p'^{(j)}_1 l'^{(j)}_1 \right) \oplus p'^{(j)}_2 l'^{(j)}_2 \right\} \right\rangle = (A(p) \oplus A_1(p_1)) \oplus A_2(p_2)$.

(3) $\gamma\,(A(p) \oplus A_1(p_1)) = \gamma \left\langle \bigcup \left\{ p^{(i)} l^{(i)} \oplus p_1^{(i)} l_1^{(i)} \right\}, \bigcup \left\{ p'^{(j)} l'^{(j)} \oplus p'^{(j)}_1 l'^{(j)}_1 \right\} \right\rangle = \left\langle \bigcup \left\{ \gamma p^{(i)} l^{(i)} \oplus \gamma p_1^{(i)} l_1^{(i)} \right\}, \bigcup \left\{ \gamma p'^{(j)} l'^{(j)} \oplus \gamma p'^{(j)}_1 l'^{(j)}_1 \right\} \right\rangle = \gamma\,A(p) \oplus \gamma\,A_1(p_1)$.

(4) $(\gamma_1 + \gamma_2)\,A(p) = \left\langle \bigcup \left\{ (\gamma_1 + \gamma_2)\, p^{(i)} l^{(i)} \right\}, \bigcup \left\{ (\gamma_1 + \gamma_2)\, p'^{(j)} l'^{(j)} \right\} \right\rangle = \left\langle \bigcup \left\{ \gamma_1 p^{(i)} l^{(i)} \oplus \gamma_2 p^{(i)} l^{(i)} \right\}, \bigcup \left\{ \gamma_1 p'^{(j)} l'^{(j)} \oplus \gamma_2 p'^{(j)} l'^{(j)} \right\} \right\rangle = \gamma_1 A(p) \oplus \gamma_2 A(p)$.

(5) $A(p) \otimes A_1(p_1) = \left\langle \bigcup \left\{ \left( l^{(i)} \right)^{p^{(i)}} \otimes \left( l_1^{(i)} \right)^{p_1^{(i)}} \right\}, \bigcup \left\{ \left( l'^{(j)} \right)^{p'^{(j)}} \otimes \left( l'^{(j)}_1 \right)^{p'^{(j)}_1} \right\} \right\rangle = \left\langle \bigcup \left\{ \left( l_1^{(i)} \right)^{p_1^{(i)}} \otimes \left( l^{(i)} \right)^{p^{(i)}} \right\}, \bigcup \left\{ \left( l'^{(j)}_1 \right)^{p'^{(j)}_1} \otimes \left( l'^{(j)} \right)^{p'^{(j)}} \right\} \right\rangle = A_1(p_1) \otimes A(p)$.

(6) $A(p) \otimes (A_1(p_1) \otimes A_2(p_2)) = \left\langle l(p) \otimes (l_1(p_1) \otimes l_2(p_2)), l'(p') \otimes (l'_1(p'_1) \otimes l'_2(p'_2)) \right\rangle$
∈*l* (*p* ),*l* (*j*) 1 ∈*l* <sup>1</sup>(*p*1),*<sup>l</sup>* (*j*) 2 ∈*l* 2(*p*2) ⎧ ⎪⎨ ⎪⎩ 5 *l* (*j*) 6*p* (*j*) ⊗ ⎛ ⎜⎝ 5 *l* (*j*) 1 6*p* (*j*) 1 ⊗ 5 *l* (*j*) 2 6*p* (*j*) 2 ⎞ ⎟⎠ ⎫ ⎪⎬ ⎪⎭ = <sup>∪</sup>*l*(*i*)∈*l*(*p*),*<sup>l</sup>* (*i*) 1 <sup>∈</sup>*l*1(*p*1),*l* (*i*) 2 <sup>∈</sup>*l*2(*p*2) 3 *l* (*i*) *p*(*i*) ⊗ *l* (*i*) 1 *p* (*i*) 1 ⊗ *l* (*i*) 2 *p* (*i*) 2 4 , ∪ *l* (*j*) ∈*l* (*p* ),*l* (*j*) 1 ∈*l* <sup>1</sup>(*p*1),*<sup>l</sup>* (*j*) 2 ∈*l* 2(*p*2) ⎧ ⎪⎨ ⎪⎩ ⎛ ⎜⎝ 5 *l* (*j*) 6*p* (*j*) ⊗ 5 *l* (*j*) 1 6*p* (*j*) 1 ⎞ ⎟⎠ ⊗ 5 *l* (*j*) 2 6*p* (*j*) 2 ⎫ ⎪⎬ ⎪⎭ = (*l* (*p*) ⊗ *l*1 (*p*1)) ⊗ *l*2 (*p*2), *l p* ⊗ *l* 1 *p* 1 ⊗ *l* 2 *p* 2 = *l* (*p*), *l p* ⊗ *l*1 (*p*1), *l* 1 *p* 1 ⊗ *l*2 (*p*2), *l* 2 *p* 2 = (*A* (*p*) ⊗ *A*1 (*p*1)) ⊗ *A*2 (*p*2) (7) ( *A* (*p*) ⊗ *A*1 (*p*1))*<sup>γ</sup>* = *l* (*p*), *l p* ⊗ *l*1 (*p*1), *l* 1 *p* 1*<sup>γ</sup>* = (*l* (*p*) ⊗ *l*1 (*p*1))*<sup>γ</sup>* , *l p* ⊗ *l* 1 *p* 1*<sup>γ</sup>* = 5 <sup>∪</sup>*l*(*i*)∈*l*(*p*),*<sup>l</sup>* (*i*) 1 <sup>∈</sup>*l*1(*p*1) *p*(*i*)*<sup>l</sup>* (*i*) ⊗ *p* (*i*) 1 *l* (*i*) 1 6*γ* , ∪ *l* (*j*) ∈*l* (*p* ),*l* (*j*) 1 ∈*l* 1(*p*1) *p* (*j*) *l* (*j*) ⊗ *p* (*j*) 1 *l* (*j*) 1 *γ* = <sup>∪</sup>*l*(*i*)∈*l*(*p*),*<sup>l</sup>* (*i*) 1 <sup>∈</sup>*l*1(*p*1) 3 *l* (*i*) *γp*(*i*) ⊗ *l* (*i*) 1 *γp* (*i*) 1 4 , ∪*l* (*j*) ∈*l* (*p* ),*l* (*j*) (*y*)∈*l* 1(*p*1) ⎧ ⎪⎨ ⎪⎩ 5 *l* (*j*) 6*γp* (*j*) ⊗ 5 *l* (*j*) 1 6*γp* (*j*) 1 ⎫ ⎪⎬ ⎪⎭ = (*l* (*p*))*<sup>γ</sup>* ⊗ (*l*1 (*p*1))*<sup>γ</sup>* , *l p γ* ⊗ *l* 1 *p* 1*<sup>γ</sup>* = (*l* (*p*))*<sup>γ</sup>* , *l p γ* ⊗ (*l*1 (*p*1))*<sup>γ</sup>* , *l* 1 *p* 1*<sup>γ</sup>* = (*A* (*p*))*<sup>γ</sup>* ⊗ (*<sup>A</sup>*1 (*p*1))*<sup>γ</sup>* (8) ( *A* (*p*))*<sup>γ</sup>*1+*γ*<sup>2</sup> = *l* (*p*), *l p <sup>γ</sup>*1+*γ*2 = (*l* (*p*))*<sup>γ</sup>*1+*γ*<sup>2</sup> , *l p <sup>γ</sup>*1+*γ*<sup>2</sup> 

= <sup>∪</sup>*l*(*i*)∈*l*(*p*) 3*l*(*i*)(*<sup>γ</sup>*1+*γ*2)*p*(*i*)4 , ∪*l* (*j*) ∈*l* (*p* ) ⎧⎪⎨⎪⎩5*l* (*j*) (*x*)6(*<sup>γ</sup>*1+*γ*2)*<sup>p</sup>* (*j*) ⎫⎪⎬⎪⎭ = <sup>∪</sup>*l*(*i*)∈*l*(*p*) 3*l*(*i*)*<sup>γ</sup>*<sup>1</sup> *p*(*i*) ⊗ *l*(*i*)*<sup>γ</sup>*<sup>2</sup> *p*(*i*)4 , ∪*l* (*j*) ∈*l* (*p* ) ⎧⎪⎨⎪⎩5*l* (*j*) 6*γ*1 *p* (*j*) ⊗ 5*l* (*j*) 6*γ*2 *p* (*j*) ⎫⎪⎬⎪⎭ = <sup>∪</sup>*l*(*i*)∈*l*(*p*) 3*l*(*i*)*<sup>γ</sup>*<sup>1</sup> *p*(*i*)4 ⊗ <sup>∪</sup>*l*(*i*)(*x*)∈*l*(*p*) 3*l*(*i*)*<sup>γ</sup>*<sup>2</sup> *p*(*i*)4 , ∪ *l* (*j*) ∈*l* (*p* ) ⎧⎪⎨⎪⎩5*l* (*j*) 6*γ*1 *p* (*j*) ⎫⎪⎬⎪⎭ ⊗ ∪*l* (*j*) ∈*l* (*p* ) ⎧⎪⎨⎪⎩5*l* (*j*) 6*γ*2 *p* (*j*) ⎫⎪⎬⎪⎭ = (*l* (*p*))*<sup>γ</sup>*<sup>1</sup> ⊗ (*l* (*p*))*<sup>γ</sup>*<sup>2</sup> , *l p <sup>γ</sup>*1 ⊗ *l p <sup>γ</sup>*2 = (*l* (*p*))*<sup>γ</sup>*<sup>1</sup> , *l p <sup>γ</sup>*1 ⊗ (*l* (*p*))*<sup>γ</sup>*<sup>2</sup> , *l p <sup>γ</sup>*2 = (*A* (*p*))*<sup>γ</sup>*<sup>1</sup> ⊗ (*A* (*p*))*<sup>γ</sup>*<sup>2</sup> .

#### **4. Aggregation Operators and Attribute Weights**

This section discusses some basic aggregation operators for PHILTEs. The deviation degree between two PHILTEs is also defined in this section. Finally, we calculate the attribute weights in the light of PHILTEs.

#### *4.1. The Aggregation Operators for PHILTEs*

Aggregation operators are powerful tools for dealing with linguistic information. To make better use of PHILTEs in real world problems, aggregation operators for PHILTEs are developed in the following.

**Definition 18.** *Let $A_k(p_k) = \langle l_k(p_k), l_k'(p_k')\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then*

$$\begin{aligned}
PHILA\left(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\right) &= \frac{1}{n}\left(\left\langle l_1(p_1), l_1'(p_1')\right\rangle \oplus \left\langle l_2(p_2), l_2'(p_2')\right\rangle \oplus \dots \oplus \left\langle l_n(p_n), l_n'(p_n')\right\rangle\right)\\
&= \frac{1}{n}\left\langle l_1(p_1) \oplus l_2(p_2) \oplus \dots \oplus l_n(p_n),\ l_1'(p_1') \oplus l_2'(p_2') \oplus \dots \oplus l_n'(p_n')\right\rangle\\
&= \frac{1}{n}\left\langle \bigcup_{l_1^{(i)} \in l_1(p_1), \dots, l_n^{(i)} \in l_n(p_n)} \left\{p_1^{(i)} l_1^{(i)} \oplus p_2^{(i)} l_2^{(i)} \oplus \dots \oplus p_n^{(i)} l_n^{(i)}\right\},\ \bigcup_{l_1'^{(j)} \in l_1'(p_1'), \dots, l_n'^{(j)} \in l_n'(p_n')} \left\{p_1'^{(j)} l_1'^{(j)} \oplus p_2'^{(j)} l_2'^{(j)} \oplus \dots \oplus p_n'^{(j)} l_n'^{(j)}\right\}\right\rangle
\end{aligned}\tag{13}$$

*is called the probabilistic hesitant intuitionistic linguistic averaging (PHILA) operator.*

**Definition 19.** *Let $A_k(p_k) = \langle l_k(p_k), l_k'(p_k')\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then*

$$\begin{aligned}
PHILWA\left(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\right) &= w_1\left\langle l_1(p_1), l_1'(p_1')\right\rangle \oplus w_2\left\langle l_2(p_2), l_2'(p_2')\right\rangle \oplus \dots \oplus w_n\left\langle l_n(p_n), l_n'(p_n')\right\rangle\\
&= \left\langle w_1 l_1(p_1) \oplus w_2 l_2(p_2) \oplus \dots \oplus w_n l_n(p_n),\ w_1 l_1'(p_1') \oplus w_2 l_2'(p_2') \oplus \dots \oplus w_n l_n'(p_n')\right\rangle\\
&= \left\langle \bigcup_{l_1^{(i)} \in l_1(p_1)} \left\{w_1 p_1^{(i)} l_1^{(i)}\right\} \oplus \dots \oplus \bigcup_{l_n^{(i)} \in l_n(p_n)} \left\{w_n p_n^{(i)} l_n^{(i)}\right\},\ \bigcup_{l_1'^{(j)} \in l_1'(p_1')} \left\{w_1 p_1'^{(j)} l_1'^{(j)}\right\} \oplus \dots \oplus \bigcup_{l_n'^{(j)} \in l_n'(p_n')} \left\{w_n p_n'^{(j)} l_n'^{(j)}\right\}\right\rangle
\end{aligned}\tag{14}$$

*is called the probabilistic hesitant intuitionistic linguistic weighted averaging (PHILWA) operator, where $w = (w_1, w_2, \dots, w_n)^t$ is the weight vector of $A_k(p_k)$ $(k = 1, 2, \dots, n)$, $w_k \geq 0$ for $k = 1, 2, \dots, n$, and $\sum_{k=1}^{n} w_k = 1$.*

Particularly, if we take $w = \left(\frac{1}{n}, \frac{1}{n}, \dots, \frac{1}{n}\right)^t$, then the PHILWA operator reduces to the PHILA operator.
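As a rough numerical illustration of Equation (14), the PHILWA operator can be read as a weighted sum of the probability-weighted lower indices, component by component. The sketch below assumes all parts are already ordered, normalized and of equal length; `philwa` and the pair encoding are illustrative names, not the paper's notation.

```python
def philwa(philtes, weights):
    # Weighted average of n PHILTEs: the i-th aggregated membership value is
    # sum_k w_k * p_k^(i) * r_k^(i); likewise for the non-membership part.
    mem_len = len(philtes[0][0])
    non_len = len(philtes[0][1])
    mem = [sum(w * parts[0][i][1] * parts[0][i][0]
               for parts, w in zip(philtes, weights)) for i in range(mem_len)]
    non = [sum(w * parts[1][j][1] * parts[1][j][0]
               for parts, w in zip(philtes, weights)) for j in range(non_len)]
    return mem, non

A1 = ([(2, 0.6), (3, 0.4)], [(1, 1.0)])   # (index, probability) pairs
A2 = ([(4, 0.5), (5, 0.5)], [(0, 1.0)])
mem, non = philwa([A1, A2], [0.5, 0.5])   # equal weights: reduces to PHILA
```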

**Definition 20.** *Let $A_k(p_k) = \langle l_k(p_k), l_k'(p_k')\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then,*

$$\begin{aligned}
PHILG\left(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\right) &= \left(\left\langle l_1(p_1), l_1'(p_1')\right\rangle \otimes \left\langle l_2(p_2), l_2'(p_2')\right\rangle \otimes \dots \otimes \left\langle l_n(p_n), l_n'(p_n')\right\rangle\right)^{\frac{1}{n}}\\
&= \left\langle l_1(p_1) \otimes l_2(p_2) \otimes \dots \otimes l_n(p_n),\ l_1'(p_1') \otimes l_2'(p_2') \otimes \dots \otimes l_n'(p_n')\right\rangle^{\frac{1}{n}}\\
&= \left\langle \bigcup_{l_1^{(i)} \in l_1(p_1), \dots, l_n^{(i)} \in l_n(p_n)} \left\{\left(l_1^{(i)}\right)^{p_1^{(i)}} \otimes \left(l_2^{(i)}\right)^{p_2^{(i)}} \otimes \dots \otimes \left(l_n^{(i)}\right)^{p_n^{(i)}}\right\},\ \bigcup_{l_1'^{(j)} \in l_1'(p_1'), \dots, l_n'^{(j)} \in l_n'(p_n')} \left\{\left(l_1'^{(j)}\right)^{p_1'^{(j)}} \otimes \left(l_2'^{(j)}\right)^{p_2'^{(j)}} \otimes \dots \otimes \left(l_n'^{(j)}\right)^{p_n'^{(j)}}\right\}\right\rangle^{\frac{1}{n}}
\end{aligned}\tag{15}$$

*is called the probabilistic hesitant intuitionistic linguistic geometric (PHILG) operator.*

**Definition 21.** *Let $A_k(p_k) = \langle l_k(p_k), l_k'(p_k')\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then*

$$\begin{aligned}
PHILWG\left(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\right) &= \left\langle l_1(p_1), l_1'(p_1')\right\rangle^{w_1} \otimes \left\langle l_2(p_2), l_2'(p_2')\right\rangle^{w_2} \otimes \dots \otimes \left\langle l_n(p_n), l_n'(p_n')\right\rangle^{w_n}\\
&= \left\langle (l_1(p_1))^{w_1} \otimes (l_2(p_2))^{w_2} \otimes \dots \otimes (l_n(p_n))^{w_n},\ (l_1'(p_1'))^{w_1} \otimes (l_2'(p_2'))^{w_2} \otimes \dots \otimes (l_n'(p_n'))^{w_n}\right\rangle\\
&= \left\langle \bigcup_{l_1^{(i)} \in l_1(p_1)} \left\{\left(l_1^{(i)}\right)^{w_1 p_1^{(i)}}\right\} \otimes \dots \otimes \bigcup_{l_n^{(i)} \in l_n(p_n)} \left\{\left(l_n^{(i)}\right)^{w_n p_n^{(i)}}\right\},\ \bigcup_{l_1'^{(j)} \in l_1'(p_1')} \left\{\left(l_1'^{(j)}\right)^{w_1 p_1'^{(j)}}\right\} \otimes \dots \otimes \bigcup_{l_n'^{(j)} \in l_n'(p_n')} \left\{\left(l_n'^{(j)}\right)^{w_n p_n'^{(j)}}\right\}\right\rangle
\end{aligned}\tag{16}$$

*is called the probabilistic hesitant intuitionistic linguistic weighted geometric (PHILWG) operator, where $w = (w_1, w_2, \dots, w_n)^t$ is the weight vector of $A_k(p_k)$ $(k = 1, 2, \dots, n)$, $w_k \geq 0$ for $k = 1, 2, \dots, n$, and $\sum_{k=1}^{n} w_k = 1$.*

Particularly, if we take $w = \left(\frac{1}{n}, \frac{1}{n}, \dots, \frac{1}{n}\right)^t$, then the PHILWG operator reduces to the PHILG operator.
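Similarly, the PHILWG operator of Equation (16) can be read multiplicatively: each aggregated index is a product of the lower indices raised to the power $w_k p_k^{(i)}$. The sketch below uses the same illustrative encoding and equal-length assumption as before; `philwg` is an assumed name.

```python
def philwg(philtes, weights):
    # Weighted geometric aggregation of n PHILTEs: the i-th aggregated
    # value is prod_k (r_k^(i)) ** (w_k * p_k^(i)), for both components.
    result = []
    for comp in (0, 1):          # 0: membership part, 1: non-membership part
        vals = []
        for i in range(len(philtes[0][comp])):
            v = 1.0
            for parts, w in zip(philtes, weights):
                r, p = parts[comp][i]
                v *= r ** (w * p)
            vals.append(v)
        result.append(vals)
    return result[0], result[1]

A1 = ([(2, 0.6)], [(1, 1.0)])
A2 = ([(4, 0.5)], [(2, 1.0)])
mem, non = philwg([A1, A2], [0.5, 0.5])   # mem[0] = 2**0.3 * 4**0.25
```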

#### *4.2. Maximizing Deviation Method for Calculating the Attribute Weights*

The choice of weights directly affects the performance of the weighted aggregation operators. For this purpose, in this subsection, the effective maximizing deviation method is adopted to calculate the weights in MAGDM when they are unknown or only partly known. Based on Definition 9, the deviation degree between two PHILTEs is defined as follows:

**Definition 22.** *Let A* (*p*) *and A*1 (*p*1) *be any two PHILTEs of equal length. Then, the deviation degree D between A* (*p*) *and A*1 (*p*1) *is given by*

$$D\left(A\left(p\right), A\_1\left(p\_1\right)\right) = d\left(l\left(p\right), l\_1\left(p\_1\right)\right) + d\left(l'\left(p'\right), l\_1'\left(p\_1'\right)\right) \tag{17}$$

*where*

$$d\left(l\left(p\right), l\_1\left(p\_1\right)\right) = \sqrt{\frac{\sum\_{i=1}^{\#l(p)} \left(p^{(i)}r^{(i)} - p\_1^{(i)}r\_1^{(i)}\right)^2}{\#l\left(p\right)}},\tag{18}$$

$$d\left(l'\left(p'\right), l\_1'\left(p\_1'\right)\right) = \sqrt{\frac{\sum\_{j=1}^{\#l'(p')} \left(p'^{(j)}r'^{(j)} - p\_1'^{(j)}r\_1'^{(j)}\right)^2}{\#l'\left(p'\right)}}\tag{19}$$

*where $r^{(i)}$ denotes the lower index of the $i$th linguistic term of $l(p)$ and $r'^{(j)}$ denotes the lower index of the $j$th linguistic term of $l'(p')$.*
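Under the same index-level reading, Definition 22 can be sketched directly from Equations (17)–(19). The encoding below (lists of (index, probability) pairs) and the names `part_dev`, `deviation` are assumptions for illustration; both arguments must be normalized to equal length, as the definition requires.

```python
import math

def part_dev(l1, l2):
    # Equations (18)/(19): RMS difference of probability-weighted indices.
    n = len(l1)
    return math.sqrt(sum((p * r - q * t) ** 2
                         for (r, p), (t, q) in zip(l1, l2)) / n)

def deviation(a, b):
    # Equation (17): membership deviation plus non-membership deviation.
    return part_dev(a[0], b[0]) + part_dev(a[1], b[1])

a = ([(2, 0.5), (3, 0.5)], [(1, 1.0)])
b = ([(2, 0.5), (4, 0.5)], [(1, 1.0)])
```

For these two values the membership parts differ only in the second term, giving `deviation(a, b) == sqrt(0.125)`.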

Based on the above definition, we now derive the attribute weight vector. When working with probabilistic linguistic data in MAGDM problems in which the weight information of the attributes is completely unknown or only partly known, the attribute weights must be determined in advance.

Given the set of alternatives $x = \{x_1, x_2, \dots, x_m\}$ and the set of $n$ attributes $c = \{c_1, c_2, \dots, c_n\}$, by using Equation (17), the deviation measure between the alternative $x_i$ and all other alternatives with respect to the attribute $c_j$ can be given as:

$$D\_{ij}(w) = \sum\_{q=1, q \neq i}^{m} w\_j D\left(h\_{ij}, h\_{qj}\right), \quad i = 1, 2, \dots, m, \; j = 1, 2, \dots, n \tag{20}$$

In accordance with the idea of the maximizing deviation method, if the deviation degree among the alternatives is small for an attribute, then that attribute should be given a small weight, since the alternatives are homogeneous with respect to it. On the contrary, it should be given a large weight. Let

$$\begin{aligned} D\_j(w) &= \sum\_{i=1}^{m} D\_{ij}(w) = \sum\_{i=1}^{m}\sum\_{q \neq i}^{m} w\_j D\left(h\_{ij}, h\_{qj}\right) \\ &= \sum\_{i=1}^{m}\sum\_{q \neq i}^{m} w\_j \left( d\left(l\_{ij}\left(p\_{ij}\right), l\_{qj}\left(p\_{qj}\right)\right) + d\left(l\_{ij}'\left(p\_{ij}'\right), l\_{qj}'\left(p\_{qj}'\right)\right) \right) \end{aligned} \tag{21}$$

show the deviation degree between one alternative and the others with respect to the attribute $c_j$, and let

$$\begin{aligned}
D(w) &= \sum\_{j=1}^{n} D\_j(w) = \sum\_{j=1}^{n}\sum\_{i=1}^{m}\sum\_{q \neq i}^{m} w\_j D\left(h\_{ij}, h\_{qj}\right) = \sum\_{j=1}^{n}\sum\_{i=1}^{m}\sum\_{q \neq i}^{m} w\_j \left( d\left(l\_{ij}\left(p\_{ij}\right), l\_{qj}\left(p\_{qj}\right)\right) + d\left(l\_{ij}'\left(p\_{ij}'\right), l\_{qj}'\left(p\_{qj}'\right)\right) \right)\\
&= \sum\_{j=1}^{n}\sum\_{i=1}^{m}\sum\_{q \neq i}^{m} w\_j \left( \sqrt{\frac{1}{\#l\_{ij}(p\_{ij})}\sum\_{k\_1=1}^{\#l\_{ij}(p\_{ij})}\left(p\_{ij}^{(k\_1)} r\_{ij}^{(k\_1)} - p\_{qj}^{(k\_1)} r\_{qj}^{(k\_1)}\right)^2} + \sqrt{\frac{1}{\#l\_{ij}'(p\_{ij}')}\sum\_{k\_2=1}^{\#l\_{ij}'(p\_{ij}')}\left(p\_{ij}'^{(k\_2)} r\_{ij}'^{(k\_2)} - p\_{qj}'^{(k\_2)} r\_{qj}'^{(k\_2)}\right)^2} \right)
\end{aligned}\tag{22}$$

express the sum of the deviation degrees among all attributes.

To obtain the attribute weight vector $w = (w_1, w_2, \dots, w_n)^t$, we build the following single objective optimization model (named $M_1$) that makes the deviation degree $D(w)$ as large as possible:

$$M\_1 = \left\{ \begin{array}{l} \max D(w) = \sum\limits\_{j=1}^{n}\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} w\_j D\left(h\_{ij}, h\_{qj}\right) \\\\ \text{s.t. } w\_j \geq 0, \; j = 1, 2, \dots, n, \; \sum\limits\_{j=1}^{n} w\_j^2 = 1 \end{array} \right.$$

To solve the above model *M*1, we use the Lagrange multiplier function:

$$L\left(w,\eta\right) = \sum\_{j=1}^{n} \sum\_{i=1}^{m} \sum\_{q \neq i}^{m} w\_{j} D\left(h\_{ij}, h\_{qj}\right) + \frac{\eta}{2} \left(\sum\_{j=1}^{n} w\_{j}^{2} - 1\right) \tag{23}$$

where *η* is the Lagrange parameter.

Then, we compute the partial derivatives of the Lagrange function with respect to $w_j$ and $\eta$ and set them equal to zero:

$$\begin{cases} \frac{\partial L(w,\eta)}{\partial w\_j} = \sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} D\left(h\_{ij}, h\_{qj}\right) + \eta w\_j = 0, \; j = 1, 2, \dots, n \\\\ \frac{\partial L(w,\eta)}{\partial \eta} = \sum\limits\_{j=1}^{n} w\_j^2 - 1 = 0 \end{cases} \tag{24}$$

By solving Equation (24), one can obtain the optimal weight vector $w = (w_1, w_2, \dots, w_n)^t$:

$$w\_j = \frac{\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} D\left(h\_{ij}, h\_{qj}\right)}{\sqrt{\sum\limits\_{j=1}^{n}\left(\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} D\left(h\_{ij}, h\_{qj}\right)\right)^2}} = \frac{\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m}\left( d\left(l\_{ij}\left(p\_{ij}\right), l\_{qj}\left(p\_{qj}\right)\right) + d\left(l\_{ij}'\left(p\_{ij}'\right), l\_{qj}'\left(p\_{qj}'\right)\right)\right)}{\sqrt{\sum\limits\_{j=1}^{n}\left(\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m}\left( d\left(l\_{ij}\left(p\_{ij}\right), l\_{qj}\left(p\_{qj}\right)\right) + d\left(l\_{ij}'\left(p\_{ij}'\right), l\_{qj}'\left(p\_{qj}'\right)\right)\right)\right)^2}}\tag{25}$$

where *j* = 1, 2, . . . , *n*.


Obviously, *wj* ≥ 0 ∀ *j*. By normalizing Equation (25), we get:

$$w\_j = \frac{\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} D\left(h\_{ij}, h\_{qj}\right)}{\sum\limits\_{j=1}^{n}\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} D\left(h\_{ij}, h\_{qj}\right)} = \frac{\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m}\left( \sqrt{\frac{1}{\#l\_{ij}(p\_{ij})}\sum\limits\_{k\_1=1}^{\#l\_{ij}(p\_{ij})}\left(p\_{ij}^{(k\_1)} r\_{ij}^{(k\_1)} - p\_{qj}^{(k\_1)} r\_{qj}^{(k\_1)}\right)^2} + \sqrt{\frac{1}{\#l\_{ij}'(p\_{ij}')}\sum\limits\_{k\_2=1}^{\#l\_{ij}'(p\_{ij}')}\left(p\_{ij}'^{(k\_2)} r\_{ij}'^{(k\_2)} - p\_{qj}'^{(k\_2)} r\_{qj}'^{(k\_2)}\right)^2} \right)}{\sum\limits\_{j=1}^{n}\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m}\left( \sqrt{\frac{1}{\#l\_{ij}(p\_{ij})}\sum\limits\_{k\_1=1}^{\#l\_{ij}(p\_{ij})}\left(p\_{ij}^{(k\_1)} r\_{ij}^{(k\_1)} - p\_{qj}^{(k\_1)} r\_{qj}^{(k\_1)}\right)^2} + \sqrt{\frac{1}{\#l\_{ij}'(p\_{ij}')}\sum\limits\_{k\_2=1}^{\#l\_{ij}'(p\_{ij}')}\left(p\_{ij}'^{(k\_2)} r\_{ij}'^{(k\_2)} - p\_{qj}'^{(k\_2)} r\_{qj}'^{(k\_2)}\right)^2} \right)}\tag{26}$$

where *j* = 1, 2, . . . , *n*.
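The normalized weights of Equation (26) depend only on the total pairwise deviation per attribute, so they can be sketched generically over any deviation measure. In the toy usage below the "evaluations" are plain numbers and the deviation is the absolute difference; `attribute_weights` is an illustrative name, not the paper's.

```python
def attribute_weights(matrix, dev):
    # matrix[i][j]: evaluation of alternative i under attribute j.
    # w_j is proportional to the total pairwise deviation under attribute j,
    # then normalized so the weights sum to one (Equation (26)).
    m, n = len(matrix), len(matrix[0])
    totals = [sum(dev(matrix[i][j], matrix[q][j])
                  for i in range(m) for q in range(m) if q != i)
              for j in range(n)]
    s = sum(totals)
    return [t / s for t in totals]

# Attribute 0 discriminates the alternatives, attribute 1 does not,
# so all the weight goes to attribute 0.
w = attribute_weights([[1, 2], [3, 2]], lambda a, b: abs(a - b))
```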

The above result can be applied to situations where the information about the attribute weights is completely unknown. However, in real life decision making problems, the weight information is usually partly known. In such cases, let $H$ be a set of the known weight information, which can be given in the following forms based on the literature [31–34].

Form 1. A weak ranking: $\{w_i \geq w_j\}$ $(i \neq j)$.
Form 2. A strict ranking: $\{w_i - w_j \geq \beta_i\}$ $(i \neq j)$.
Form 3. A ranking of differences: $\{w_i - w_j \geq w_k - w_l\}$ $(j \neq k \neq l)$.
Form 4. A ranking with multiples: $\{w_i \geq \beta_i w_j\}$ $(i \neq j)$.
Form 5. An interval form: $\{\beta_i \leq w_j \leq \beta_i + \varepsilon_i\}$ $(i \neq j)$.

Here, $\beta_i$ and $\varepsilon_i$ denote non-negative numbers. With the set $H$, we can build the following model:

$$M\_2 = \left\{ \begin{array}{l} \max D(w) = \sum\limits\_{j=1}^{n}\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} w\_j D\left(h\_{ij}, h\_{qj}\right) \\\\ \text{s.t. } w\_j \in H, \; w\_j \geq 0, \; j = 1, 2, \dots, n, \; \sum\limits\_{j=1}^{n} w\_j^2 = 1 \end{array} \right.$$

from which the optimal weight vector $w = (w_1, w_2, \dots, w_n)^t$ is obtained.

#### **5. MAGDM with Probabilistic Hesitant Intuitionistic Linguistic Information**

In this section, two practical methods, i.e., an extended TOPSIS method and an aggregation based method, for MAGDM problems are proposed, where the opinions of DMs take the form of PHILTSs.

#### *5.1. Extended TOPSIS Method for MAGDM with Probabilistic Hesitant Intuitionistic Linguistic Information*

Of the numerous MAGDM methods, TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) is one of the effective methods for ranking and selecting a number of possible alternatives by measuring Euclidean distances. It has been successfully applied to solve evaluation problems with a finite number of alternatives and criteria [19,24,28] because it is easy to understand and implement, and can measure the relative performance for each alternative.

In the following, we discuss the complete construction of extended TOPSIS method in PHILTS regard. This methodology involves the following steps.

Step 1: Analyze the given MAGDM problem. Since the problem is one of group decision making, let there be $l$ decision makers or experts $M = \{m_1, m_2, \dots, m_l\}$ involved in it. The set of alternatives is $x = \{x_1, x_2, \dots, x_m\}$ and the set of attributes is $c = \{c_1, c_2, \dots, c_n\}$. The experts provide their linguistic evaluation values for membership and non-membership by using the linguistic term set $S = \{s_0, s_1, \dots, s_g\}$ over the alternative $x_i$ $(i = 1, 2, \dots, m)$ with respect to the attribute $c_j$ $(j = 1, 2, \dots, n)$.

The DM $m_k$ $(k = 1, 2, \dots, l)$ states his membership and non-membership linguistic evaluation values, keeping in mind all the alternatives and attributes, in the form of PHILTEs. Thus, the probabilistic hesitant intuitionistic linguistic decision matrix $H^k = \left[\left\langle l_{ij}^{k}\left(p_{ij}\right), l_{ij}'^{(k)}\left(p_{ij}'\right)\right\rangle\right]_{m \times n}$ is constructed. It should be noted that the preference of alternative $x_i$ with respect to decision maker $m_k$ and attribute $c_j$ is denoted by the PHILTE $A_{kij}\left(p_{ij}\right)$ in a group decision making problem with $l$ experts.

Step 2: Calculate the single probabilistic hesitant intuitionistic linguistic decision matrix $H$ by aggregating the opinions of the DMs $H^{(1)}, H^{(2)}, \dots, H^{(l)}$; $H = \left[h_{ij}\right]$, where

$$\begin{array}{l} h\_{ij} = \left\langle \left\{ s\_{m\_{ij}}\left(p\_{ij}\right), s\_{n\_{ij}}\left(q\_{ij}\right) \right\}, \left\{ s'\_{m'\_{ij}}\left(p'\_{ij}\right), s'\_{n'\_{ij}}\left(q'\_{ij}\right) \right\} \right\rangle, \text{ where} \\\\ s\_{m\_{ij}}\left(p\_{ij}\right) = \min \left\{ \min\limits\_{k=1}^{l} \left( \max l^{k}\_{ij}\left(p\_{ij}\right) \right), \max\limits\_{k=1}^{l} \left( \min l^{k}\_{ij}\left(p\_{ij}\right) \right) \right\}, \\\\ s\_{n\_{ij}}\left(q\_{ij}\right) = \max \left\{ \min\limits\_{k=1}^{l} \left( \max l^{k}\_{ij}\left(q\_{ij}\right) \right), \max\limits\_{k=1}^{l} \left( \min l^{k}\_{ij}\left(q\_{ij}\right) \right) \right\}, \\\\ s'\_{m'\_{ij}}\left(p'\_{ij}\right) = \min \left\{ \min\limits\_{k=1}^{l} \left( \max l'^{k}\_{ij}\left(p'\_{ij}\right) \right), \max\limits\_{k=1}^{l} \left( \min l'^{k}\_{ij}\left(p'\_{ij}\right) \right) \right\}, \\\\ s'\_{n'\_{ij}}\left(q'\_{ij}\right) = \max \left\{ \min\limits\_{k=1}^{l} \left( \max l'^{k}\_{ij}\left(q'\_{ij}\right) \right), \max\limits\_{k=1}^{l} \left( \min l'^{k}\_{ij}\left(q'\_{ij}\right) \right) \right\} \end{array}$$

Here, $\max l_{ij}^{k}\left(p_{ij}\right)$ and $\min l_{ij}^{k}\left(p_{ij}\right)$ are taken according to the maximum and minimum value of $p_{ij} \times r_{l_{ij}}$, $l = 1, 2, \dots, \#l_{ij}^{k}\left(p_{ij}\right)$, respectively, where $r_{l_{ij}}$ denotes the lower index of the $l$th linguistic term and $p_{ij}$ is its corresponding probability.


In this aggregated matrix $H$, the preference of alternative $x_i$ with respect to attribute $c_j$ is denoted by $h_{ij}$.

Each entry $h_{ij}$ of the aggregated matrix $H$ is also a PHILTE; to see this, we have to prove that

$s_{m_{ij}}\left(p_{ij}\right) + s'_{n'_{ij}}\left(q'_{ij}\right) \leq s_g$ and $s_{n_{ij}}\left(q_{ij}\right) + s'_{m'_{ij}}\left(p'_{ij}\right) \leq s_g$. Since $\left\langle l_{ij}^{k}\left(p_{ij}\right), l_{ij}'^{(k)}\left(p_{ij}'\right)\right\rangle$ is a PHILTE for every $k$th expert, $i$th alternative and $j$th attribute, it must satisfy the conditions $\min l_{ij}^{(k)} + \max l_{ij}'^{(k)} \leq s_g$ and $\max l_{ij}^{(k)} + \min l_{ij}'^{(k)} \leq s_g$.

Thus, the above simple construction of $s_{m_{ij}}\left(p_{ij}\right)$, $s_{n_{ij}}\left(q_{ij}\right)$, $s'_{m'_{ij}}\left(p'_{ij}\right)$ and $s'_{n'_{ij}}\left(q'_{ij}\right)$ guarantees that $h_{ij}$ is a PHILTE.
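The validity check above reduces to two comparisons on lower indices. A minimal sketch, assuming the linguistic terms are represented only by their lower indices (probabilities omitted), with `is_philte` an illustrative name:

```python
def is_philte(mem_indices, non_indices, g):
    # A membership/non-membership index pair is a valid PHILTE when
    # max(membership) + min(non-membership) <= g and
    # min(membership) + max(non-membership) <= g.
    return (max(mem_indices) + min(non_indices) <= g and
            min(mem_indices) + max(non_indices) <= g)

ok = is_philte([1, 2], [2, 3], g=6)    # valid for g = 6
bad = is_philte([4, 5], [3, 4], g=6)   # 5 + 3 > 6, so not a PHILTE
```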

Step 3: Normalize the probabilistic hesitant intuitionistic linguistic decision matrix *H* = I*hij*J according to the method in Section 3.1.

Step 4: Obtain the weight vector $w = (w_1, w_2, \dots, w_n)^t$ of the attributes $c_j$ $(j = 1, 2, \dots, n)$ by Equation (26):

$$w\_j = \frac{\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} D\left(h\_{ij}, h\_{qj}\right)}{\sum\limits\_{j=1}^{n}\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m} D\left(h\_{ij}, h\_{qj}\right)} = \frac{\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m}\left( d\left(l\_{ij}\left(p\_{ij}\right), l\_{qj}\left(p\_{qj}\right)\right) + d\left(l\_{ij}'\left(p\_{ij}'\right), l\_{qj}'\left(p\_{qj}'\right)\right)\right)}{\sum\limits\_{j=1}^{n}\sum\limits\_{i=1}^{m}\sum\limits\_{q \neq i}^{m}\left( d\left(l\_{ij}\left(p\_{ij}\right), l\_{qj}\left(p\_{qj}\right)\right) + d\left(l\_{ij}'\left(p\_{ij}'\right), l\_{qj}'\left(p\_{qj}'\right)\right)\right)}, \quad j = 1, 2, \dots, n$$

Step 5: The PHILTS positive ideal solution (PHILTS-PIS) of the alternatives, denoted by $A^+ = \left\langle l^+(p), l'^+(p)\right\rangle$, is defined as follows:

$$A^{+} = \left\langle l^{+}\left(p\right) = \left(l\_1^{+}\left(p\right), l\_2^{+}\left(p\right), \dots, l\_n^{+}\left(p\right)\right), l^{'+}\left(p\right) = \left(l\_1^{'+}\left(p\right), l\_2^{'+}\left(p\right), \dots, l\_n^{'+}\left(p\right)\right) \right\rangle \tag{27}$$

where $l_j^+(p) = \left\{ l_j^{(k_1)+} \mid k_1 = 1, 2, \dots, \#l_{ij}(p) \right\}$ and $l_j^{(k_1)+} = s_{\max_i p_{ij}^{(k_1)} r_{ij}^{(k_1)}}$, $k_1 = 1, 2, \dots, \#l_{ij}(p)$, $j = 1, 2, \dots, n$, where $r_{ij}^{(k_1)}$ is the lower index of the linguistic term $l_{ij}^{(k_1)}$, while $l_j'^+(p) = \left\{ l_j'^{(k_2)+} \mid k_2 = 1, 2, \dots, \#l_{ij}'(p) \right\}$ and $l_j'^{(k_2)+} = s_{\min_i p_{ij}'^{(k_2)} r_{ij}'^{(k_2)}}$, $k_2 = 1, 2, \dots, \#l_{ij}'(p)$, $j = 1, 2, \dots, n$, where $r_{ij}'^{(k_2)}$ is the lower index of the linguistic term $l_{ij}'^{(k_2)}$. Similarly, the PHILTS negative ideal solution (PHILTS-NIS) of the alternatives, denoted by $A^- = \left\langle l^-(p), l'^-(p)\right\rangle$, is defined as follows:

$$A^- = \left\langle l^-\left(p\right) = \left(l\_1^-\left(p\right), l\_2^-\left(p\right), \dots, l\_n^-\left(p\right)\right), l^{'-}\left(p\right) = \left(l\_1^{'-}\left(p\right), l\_2^{'-}\left(p\right), \dots, l\_n^{'-}\left(p\right)\right)\right\rangle \tag{28}$$

where $l_j^-(p) = \left\{ l_j^{(k_1)-} \mid k_1 = 1, 2, \dots, \#l_{ij}(p) \right\}$ and $l_j^{(k_1)-} = s_{\min_i p_{ij}^{(k_1)} r_{ij}^{(k_1)}}$, $k_1 = 1, 2, \dots, \#l_{ij}(p)$, $j = 1, 2, \dots, n$, where $r_{ij}^{(k_1)}$ is the lower index of the linguistic term $l_{ij}^{(k_1)}$, while $l_j'^-(p) = \left\{ l_j'^{(k_2)-} \mid k_2 = 1, 2, \dots, \#l_{ij}'(p) \right\}$ and $l_j'^{(k_2)-} = s_{\max_i p_{ij}'^{(k_2)} r_{ij}'^{(k_2)}}$, $k_2 = 1, 2, \dots, \#l_{ij}'(p)$, $j = 1, 2, \dots, n$, where $r_{ij}'^{(k_2)}$ is the lower index of the linguistic term $l_{ij}'^{(k_2)}$.

Step 6: Compute the deviation degree between each alternative $x_i$ and the PHILTS-PIS $A^+$ as follows:

$$\begin{aligned}
D\left(x\_i, A^+\right) &= \sum\_{j=1}^{n} w\_j D\left(h\_{ij}, A^+\right) = \sum\_{j=1}^{n} w\_j \left( d\left(l\_{ij}(p), l\_j^+(p)\right) + d\left(l\_{ij}'(p), l\_j'^+(p)\right) \right)\\
&= \sum\_{j=1}^{n} w\_j \left( \sqrt{\frac{1}{\#l\_{ij}(p)}\sum\_{k\_1=1}^{\#l\_{ij}(p)}\left(p\_{ij}^{(k\_1)} r\_{ij}^{(k\_1)} - \left(p\_j^{(k\_1)} r\_j^{(k\_1)}\right)^+\right)^2} + \sqrt{\frac{1}{\#l\_{ij}'(p)}\sum\_{k\_2=1}^{\#l\_{ij}'(p)}\left(p\_{ij}'^{(k\_2)} r\_{ij}'^{(k\_2)} - \left(p\_j'^{(k\_2)} r\_j'^{(k\_2)}\right)^+\right)^2} \right)
\end{aligned}\tag{29}$$

The smaller the deviation degree $D\left(x_i, A^+\right)$, the better the alternative $x_i$.

Similarly, compute the deviation degree between each alternative $x_i$ and the PHILTS-NIS $A^-$ as follows:

$$\begin{aligned}
D\left(x\_i, A^-\right) &= \sum\_{j=1}^{n} w\_j D\left(h\_{ij}, A^-\right) = \sum\_{j=1}^{n} w\_j \left( d\left(l\_{ij}(p), l\_j^-(p)\right) + d\left(l\_{ij}'(p), l\_j'^-(p)\right) \right)\\
&= \sum\_{j=1}^{n} w\_j \left( \sqrt{\frac{1}{\#l\_{ij}(p)}\sum\_{k\_1=1}^{\#l\_{ij}(p)}\left(p\_{ij}^{(k\_1)} r\_{ij}^{(k\_1)} - \left(p\_j^{(k\_1)} r\_j^{(k\_1)}\right)^-\right)^2} + \sqrt{\frac{1}{\#l\_{ij}'(p)}\sum\_{k\_2=1}^{\#l\_{ij}'(p)}\left(p\_{ij}'^{(k\_2)} r\_{ij}'^{(k\_2)} - \left(p\_j'^{(k\_2)} r\_j'^{(k\_2)}\right)^-\right)^2} \right)
\end{aligned}\tag{30}$$

The larger the deviation degree $D\left(x_i, A^-\right)$, the better the alternative $x_i$.

Step 7: Determine $D_{\min}\left(x_i, A^+\right)$ and $D_{\max}\left(x_i, A^-\right)$, where

$$D\_{\min} \left( \mathbf{x}\_{i\prime} \boldsymbol{A}^{+} \right) = \min\_{1 \le i \le m} D \left( \mathbf{x}\_{i\prime} \boldsymbol{A}^{+} \right) \tag{31}$$

and

$$D\_{\max} \left( \mathbf{x}\_{i\prime} \, A^{-} \right) = \max\_{1 \le i \le m} D \left( \mathbf{x}\_{i\prime} \, A^{-} \right) \tag{32}$$

Step 8: Determine the closeness coefficient *Cl* of each alternative *xi* to rank the alternatives.

$$\mathcal{Cl}\left(\mathbf{x}\_{i}\right) = \frac{D\left(\mathbf{x}\_{i\prime}A^{-}\right)}{D\_{\text{max}}\left(\mathbf{x}\_{i\prime}A^{-}\right)} - \frac{D\left(\mathbf{x}\_{i\prime}A^{+}\right)}{D\_{\text{min}}\left(\mathbf{x}\_{i\prime}A^{+}\right)}\tag{33}$$

Step 9: Pick the best alternative $x_i$ on the basis of the closeness coefficient $Cl$, where the larger the closeness coefficient $Cl\left(x_i\right)$, the better the alternative $x_i$. Thus, the best alternative is

$$\mathbf{x}^{b} = \left\{ \mathbf{x}\_{i} \Big| \max\_{1 \le i \le m} \mathbf{C}l \, (\mathbf{x}\_{i}) \right\} \tag{34}$$
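Steps 7–9 combine into a short ranking routine once the deviations from the PIS and NIS are available. The sketch below implements Equation (33) on hypothetical deviation values; `closeness` and the sample numbers are illustrative, not data from the case study.

```python
def closeness(d_plus, d_minus):
    # Equation (33): Cl(x_i) = D(x_i,A-)/Dmax(x_i,A-) - D(x_i,A+)/Dmin(x_i,A+)
    d_min_plus = min(d_plus)     # Equation (31)
    d_max_minus = max(d_minus)   # Equation (32)
    return [dm / d_max_minus - dp / d_min_plus
            for dp, dm in zip(d_plus, d_minus)]

d_plus = [0.2, 0.5, 0.3]    # hypothetical deviations from the PIS A+
d_minus = [0.6, 0.1, 0.4]   # hypothetical deviations from the NIS A-
cl = closeness(d_plus, d_minus)
best = max(range(len(cl)), key=lambda i: cl[i])   # Equation (34)
```

The first alternative is both closest to the PIS and farthest from the NIS here, so it attains the largest closeness coefficient.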

#### *5.2. The Aggregation-Based Method for MAGDM with Probabilistic Hesitant Intuitionistic Linguistic Information*

In this subsection, the aggregation-based method for MAGDM is presented, where the preference opinions of the DMs are represented by PHILTSs. In Section 4, we developed the aggregation operators PHILA, PHILWA, PHILG and PHILWG. In this algorithm, we use the PHILWA operator to aggregate the attribute values of each alternative $x_i$ into overall attribute values. The following steps are involved in this algorithm. The first four steps are the same as in the extended TOPSIS method; therefore, we go directly to Step 5.

Step 5: Determine the overall attribute values $Z_i(w)$ $(i = 1, 2, \dots, m)$, where $w = (w_1, w_2, \dots, w_n)^T$ is the weight vector of the attributes, using the PHILWA operator. This can be expressed as follows:

$$\begin{aligned}
Z\_i(w) &= w\_1 \left\langle l\_{i1}(p), l\_{i1}'(p')\right\rangle \oplus w\_2 \left\langle l\_{i2}(p), l\_{i2}'(p')\right\rangle \oplus \dots \oplus w\_n \left\langle l\_{in}(p), l\_{in}'(p')\right\rangle\\
&= \left\langle w\_1 l\_{i1}(p) \oplus w\_2 l\_{i2}(p) \oplus \dots \oplus w\_n l\_{in}(p),\ w\_1 l\_{i1}'(p') \oplus w\_2 l\_{i2}'(p') \oplus \dots \oplus w\_n l\_{in}'(p')\right\rangle\\
&= \left\langle \bigcup\_{l\_{i1}^{(k\_1)} \in l\_{i1}(p)} \left\{w\_1 p\_{i1}^{(k\_1)} l\_{i1}^{(k\_1)}\right\} \oplus \dots \oplus \bigcup\_{l\_{in}^{(k\_1)} \in l\_{in}(p)} \left\{w\_n p\_{in}^{(k\_1)} l\_{in}^{(k\_1)}\right\},\ \bigcup\_{l\_{i1}'^{(k\_2)} \in l\_{i1}'(p')} \left\{w\_1 p\_{i1}'^{(k\_2)} l\_{i1}'^{(k\_2)}\right\} \oplus \dots \oplus \bigcup\_{l\_{in}'^{(k\_2)} \in l\_{in}'(p')} \left\{w\_n p\_{in}'^{(k\_2)} l\_{in}'^{(k\_2)}\right\}\right\rangle
\end{aligned}\tag{35}$$

where *i* = 1, 2, . . . , *m*.

Step 6: Compare the overall attribute values $Z_i(w)$ $(i = 1, 2, \dots, m)$ with one another, based on their score functions and deviation degrees, whose details are given in Section 3.2.

Step 7: Rank the alternatives $x_i$ $(i = 1, 2, \dots, m)$ according to the order of $Z_i(w)$ $(i = 1, 2, \dots, m)$ and pick the best alternative.

The flow chart of the proposed models is presented in Figure 1.

**Figure 1.** Extended TOPSIS and Aggregation-based models.

#### **6. A Case Study**

To validate the proposed theory and decision making models, a practical example taken from [28] is solved in this section. A group of seven people *ml* (*l* = 1, 2, 3, . . . , 7) needs to invest their savings in the most profitable way. They consider five possibilities: *x*1 is real estate, *x*2 is the stock market, *x*3 is T-bills, *x*4 is a national saving scheme, and *x*5 is an insurance company. To determine the best option, the following attributes are taken into account: *c*1 is the risk factor, *c*2 is the growth, *c*3 is quick refund, and *c*4 is the complicated documents requirement. Based upon their knowledge and experience, the DMs provide their opinions in terms of the following HIFLTSs.

#### *6.1. The Extended TOPSIS Method for the Considered Case*

We handle the above problem by applying the extended TOPSIS method.

Step 1: The probabilistic hesitant intuitionistic linguistic decision matrices derived from Tables 1–3 are shown in Tables 4–6, respectively.


**Table 1.** Decision matrix provided by the DMs 1, 2, 3 (*<sup>m</sup>*1, *m*2, *<sup>m</sup>*3).

**Table 2.** Decision matrix provided by the DMs 4, 5 (*<sup>m</sup>*4, *<sup>m</sup>*5).



**Table 3.** Decision matrix provided by the DMs 6, 7 (*<sup>m</sup>*6, *<sup>m</sup>*7).

**Table 4.** Probabilistic hesitant intuitionistic linguistic decision matrix *H*1 with respect to DMs 1, 2, 3 (*<sup>m</sup>*1, *m*2, *<sup>m</sup>*3).


**Table 5.** Probabilistic hesitant intuitionistic linguistic decision matrix *H*2 with respect to DMs 4, 5 (*<sup>m</sup>*4, *<sup>m</sup>*5).


**Table 6.** Probabilistic hesitant intuitionistic linguistic decision matrix *H*3 with respect to DMs 6, 7 (*<sup>m</sup>*6, *<sup>m</sup>*7).


Step 2: The decision matrix H in Table 7 is constructed by utilizing Tables 4–6.

**Table 7.** Decision matrix (H).


Step 3: The normalized probabilistic hesitant intuitionistic linguistic decision matrix of the group is shown in Table 8.

**Table 8.** The normalized probabilistic hesitant intuitionistic linguistic decision matrix.


Step 4: The weight vector is derived from Equation (26) as follows:

*w* = (0.2715, 0.2219, 0.2445, 0.2621)*T*

Step 5: The PHILTS-PIS *A*+ and the PHILTS-NIS *A*− are derived using Equations (27) and (28) as follows:

*A*+ = ({3, 3} , {0, 0}, {3, 2.4} , {0, 0}, {3, 1.6} , {0, 0}, {3, 2.5} , {0, 0})

*A*− = ({0, 0.661} , {2.25, 1}, {1, 1} , {2.25, 1.25}, {0.5, 0.66} , {2, 1.6}, {1, 0.2} , {2, 1.6})

Step 6: Calculate the distance of each alternative from the PHILTS-PIS *A*+ and the PHILTS-NIS *A*−:

*D* (*x*1, *A*+) = 2.1211, *D* (*x*2, *A*+) = 2.5516, *D* (*x*3, *A*+) = 2.9129, *D* (*x*4, *A*+) = 1.7999, *D* (*x*5, *A*+) = 1.6494

*D* (*x*1, *A*−) = 2.0142, *D* (*x*2, *A*−) = 1.5861, *D* (*x*3, *A*−) = 1.6204, *D* (*x*4, *A*−) = 2.4056, *D* (*x*5, *A*−) = 2.2812

Step 7: Calculate *D*min (*xi*, *A*+) and *D*max (*xi*, *A*−) by Equations (31) and (32):

*D*min (*xi*, *A*+) = 1.6494, *D*max (*xi*, *A*−) = 2.4056

Step 8: Determine the closeness coefficient of each alternative *xi* by Equation (33):

*Cl* (*x*1) = −0.4486, *Cl* (*x*2) = −0.8876, *Cl* (*x*3) = −1.0924, *Cl* (*x*4) = −0.0912, *Cl* (*x*5) = −0.0519

Step 9: Rank the alternatives according to the ranking of *Cl* (*xi*) (*i* = 1, 2, . . . , 5): *x*5 > *x*4 > *x*1 > *x*2 > *x*3; thus, *x*5 (insurance company) is the best alternative.
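The arithmetic of Steps 6–9 can be checked numerically. The sketch below reproduces the closeness coefficients from the listed distances, assuming Equation (33) has the form *Cl* (*xi*) = *D* (*xi*, *A*−)/*D*max − *D* (*xi*, *A*+)/*D*min; the printed values agree with this form up to rounding.

```python
d_plus = [2.1211, 2.5516, 2.9129, 1.7999, 1.6494]   # D(x_i, A+)
d_minus = [2.0142, 1.5861, 1.6204, 2.4056, 2.2812]  # D(x_i, A-)

d_min = min(d_plus)    # D_min(x_i, A+) = 1.6494
d_max = max(d_minus)   # D_max(x_i, A-) = 2.4056

# Closeness coefficient of each alternative (assumed form of Equation (33)).
cl = [dm / d_max - dp / d_min for dp, dm in zip(d_plus, d_minus)]

# Rank the alternatives (1..5) by closeness coefficient, best first.
ranking = sorted(range(1, 6), key=lambda i: cl[i - 1], reverse=True)
# ranking -> [5, 4, 1, 2, 3], i.e. x5 > x4 > x1 > x2 > x3
```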

#### *6.2. The Aggregation-Based Method for the Considered Case*

We can also apply the aggregation-based method to attain the ranking of alternatives for the case study.

Step 1: Construct the probabilistic hesitant intuitionistic linguistic decision matrices of the group as listed in Tables 4–6; these are then aggregated and normalized as shown in Tables 7 and 8.

Step 2: Utilize Equation (26) to obtain the weight vector:

*w* = (0.2715, 0.2219, 0.2445, 0.2621)*T*

Step 3: Derive the overall attribute value of each alternative *xi* (*i* = 1, 2, 3, 4, 5) by using Equation (35):

*Z* 1 (*w*) = ({*s*1.8962, *s*0.5187} , {*s*1.2847, *s*0.5187}), *Z* 2 (*w*) = ({*s*1.4074, *s*0.9776} , {*s*1.4679, *s*0.4934}), *Z* 3 (*w*) = ({*s*1.7923, *s*1.1256} , {*s*1.8096, *s*0.9915}), *Z* 4 (*w*) = ({*s*2.1467, *s*1.642} , {*s*0.7977, *s*0.8886}), *Z* 5 (*w*) = ({*s*2.0596, *s*1.8546} , {*s*1.0267, *s*0.8043}).

Step 4: Compute the score of each overall attribute value *Z i* (*w*) by Definition 14:

*E* (*Z* 1 (*w*)) = *s*3.1528, *E* (*Z* 2 (*w*)) = *s*3.1059, *E* (*Z* 3 (*w*)) = *s*3.0584, *E* (*Z* 4 (*w*)) = *s*4.0512, *E* (*Z* 5 (*w*)) = *s*5.8726

Step 5: Compare the overall attribute values of the alternatives according to the values of the score function. It is obvious that *x*5 > *x*4 > *x*1 > *x*2 > *x*3. Thus, again, *x*5 is the best alternative.
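Steps 4 and 5 reduce to sorting the alternatives by their score subscripts; a quick check against the values above:

```python
# Score subscripts E(Z_i(w)) = s_k from Step 4, one per alternative.
scores = {"x1": 3.1528, "x2": 3.1059, "x3": 3.0584, "x4": 4.0512, "x5": 5.8726}

# Rank the alternatives by score, highest first.
ranking = sorted(scores, key=scores.get, reverse=True)
# ranking -> ['x5', 'x4', 'x1', 'x2', 'x3']
```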

#### **7. Discussions and Comparison**

For the purpose of comparison, the case study is solved again in this section by applying the TOPSIS method with traditional HIFLTSs.

Step 1: The decision matrix X in Table 9 is constructed by utilizing Tables 1–3 as follows:


**Table 9.** Decision matrix (X).

Step 2: Determine the HIFLTS-PIS *P*+ and the HIFLTS-NIS *P*− for the cost criteria *c*1, *c*4 and the benefit criteria *c*2, *c*3 as follows:

*P*+ = [([*s*0,*s*1] , [*s*3,*s*4]), ([*s*5,*s*6] , [*s*0,*s*0]), ([*s*5,*s*6] , [*s*0,*s*0]), ([*s*0,*s*1] , [*s*3,*s*4])]

*P*− = [([*s*6,*s*6] , [*s*0,*s*0]), ([*s*1,*s*2] , [*s*3,*s*5]), ([*s*0,*s*1] , [*s*3,*s*4]), ([*s*6,*s*6] , [*s*0,*s*0])]

Note: One can see the details of the HIFLTS-PIS *P*+ and the HIFLTS-NIS *P*− in [28].

Step 3: Calculate the positive ideal matrix *D*+ and the negative ideal matrix *D*− as follows:

$$D^{+} = \begin{bmatrix} 8+1+12+5\\ 4+11+2+14\\ 9+7+2+2\\ 15+9+14+12\\ 15+12+14+16 \end{bmatrix} = \begin{bmatrix} 26\\ 31\\ 20\\ 50\\ 57 \end{bmatrix}$$

*D*+ 1 = *d* (*x*11, *v*+ 1) + *d* (*x*12, *v*+ 2) + *d* (*x*13, *v*+ 3) + *d* (*x*14, *v*+ 4), in which

*d* (*x*11, *v*+ 1) = *d* (([*s*2,*s*4] , [*s*1,*s*3]), ([*s*0,*s*1] , [*s*3,*s*4])) = |2 − 0| + |4 − 1| + |1 − 3| + |3 − 4| = 8

Other entries can be found by similar calculations.
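The sample entry above can be verified in code. The helper below is a sketch (the function name is ours): it sums the absolute subscript differences over the two membership bounds and the two non-membership bounds of a pair of HIFLTS entries.

```python
def hifl_distance(a, b):
    """Distance between two HIFLTS entries, each given as a pair
    (membership bounds, non-membership bounds) of subscript lists:
    the sum of absolute subscript differences over all four bounds."""
    (a_mem, a_non), (b_mem, b_non) = a, b
    return sum(abs(x - y) for x, y in zip(a_mem + a_non, b_mem + b_non))

x11 = ([2, 4], [1, 3])      # ([s2, s4], [s1, s3])
v1_plus = ([0, 1], [3, 4])  # PIS entry ([s0, s1], [s3, s4])
d11 = hifl_distance(x11, v1_plus)  # |2-0| + |4-1| + |1-3| + |3-4| = 8
```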

$$D^{-} = \begin{bmatrix} 10+15+5+13\\ 14+5+15+4\\ 9+9+15+16\\ 3+7+3+6\\ 3+4+3+2 \end{bmatrix} = \begin{bmatrix} 43\\ 38\\ 49\\ 19\\ 12 \end{bmatrix}$$

Step 4: The relative closeness (*RC*) of each alternative to the ideal solution can be obtained as follows:

*RC*(*x*1) = 43/(26 + 43) = 0.6232, *RC*(*x*2) = 38/(31 + 38) = 0.5507

The *RC* of the other alternatives can be found by similar calculations.

*RC*(*x*3) = 0.7101, *RC*(*x*4) = 0.2754, *RC*(*x*5) = 0.1739.
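The *RC* values follow the standard TOPSIS formula *RC*(*xi*) = *D*−*i* / (*D*+*i* + *D*−*i*), and can be reproduced directly from the two ideal matrices:

```python
d_plus = [26, 31, 20, 50, 57]   # D+ column, one entry per alternative
d_minus = [43, 38, 49, 19, 12]  # D- column

# Relative closeness of each alternative to the ideal solution.
rc = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]
# rc (rounded) -> [0.6232, 0.5507, 0.7101, 0.2754, 0.1739]

# Rank the alternatives (1..5) by RC, highest first.
ranking = sorted(range(1, 6), key=lambda i: rc[i - 1], reverse=True)
# ranking -> [3, 1, 2, 4, 5], i.e. x3 > x1 > x2 > x4 > x5
```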


Step 5: The ranking of the alternatives *xi* (*i* = 1, 2, . . . , 5) according to the closeness coefficient *RC*(*xi*) is: *x*3 > *x*1 > *x*2 > *x*4 > *x*5.




The comparisons and other aspects are summarized in Table 11.


**Table 11.** The advantages and limitations of the proposed methods.
