**2. Preliminaries**

In this section, we briefly review the concepts of *q*-ROFS, PIFS, linguistic term sets and HM, and provide the definitions of *q*-RPFS and *q*-RPLS.

#### *2.1. q-Rung Orthopair Fuzzy Set (q-ROFS) and q-Rung Picture Fuzzy Set (q-RPFS)*

**Definition 1 [26]**. *Let X be an ordinary fixed set, a q-ROFS A defined on X is given by*

$$A = \{ \langle \mathbf{x}, \mu\_A(\mathbf{x}), \upsilon\_A(\mathbf{x}) \rangle \, | \, \mathbf{x} \in X \},\tag{1}$$

*where $\mu_A(x)$ and $v_A(x)$ represent the membership degree and the non-membership degree respectively, satisfying $\mu_A(x)\in[0,1]$, $v_A(x)\in[0,1]$ and $0\le \mu_A(x)^q+v_A(x)^q\le 1$ $(q\ge 1)$. The indeterminacy degree is defined as $\pi_A(x)=\left(\mu_A(x)^q+v_A(x)^q-\mu_A(x)^q v_A(x)^q\right)^{1/q}$. For convenience, $(\mu_A(x), v_A(x))$ is called a q-ROFN by Liu and Wang [27], which can be denoted by $\widetilde{a}=(\mu, v)$.*

Liu and Wang [27] also proposed some operations for *q*-ROFNs.

**Definition 2 [27].** *Let $\widetilde{a}_1=(\mu_1, v_1)$, $\widetilde{a}_2=(\mu_2, v_2)$ be two q-ROFNs, and λ be a positive real number, then*

$$\begin{aligned}
1.&\quad \widetilde{a}_1\oplus\widetilde{a}_2=\left(\left(\mu_1^q+\mu_2^q-\mu_1^q\mu_2^q\right)^{1/q},\,v_1v_2\right),\\
2.&\quad \widetilde{a}_1\otimes\widetilde{a}_2=\left(\mu_1\mu_2,\,\left(v_1^q+v_2^q-v_1^qv_2^q\right)^{1/q}\right),\\
3.&\quad \lambda\widetilde{a}_1=\left(\left(1-\left(1-\mu_1^q\right)^{\lambda}\right)^{1/q},\,v_1^{\lambda}\right),\\
4.&\quad \widetilde{a}_1^{\lambda}=\left(\mu_1^{\lambda},\,\left(1-\left(1-v_1^q\right)^{\lambda}\right)^{1/q}\right).
\end{aligned}$$
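These operations can be sketched in a few lines of code. The following is a minimal illustration, assuming a *q*-ROFN is represented as a plain tuple `(u, v)`; the function names are our own convention, not from [27]:

```python
# Sketch of the q-ROFN operations in Definition 2; q is passed explicitly.

def q_add(a1, a2, q=3):
    """a1 (+) a2: probabilistic sum on membership, product on non-membership."""
    (u1, v1), (u2, v2) = a1, a2
    return ((u1**q + u2**q - u1**q * u2**q) ** (1 / q), v1 * v2)

def q_mul(a1, a2, q=3):
    """a1 (x) a2: product on membership, probabilistic sum on non-membership."""
    (u1, v1), (u2, v2) = a1, a2
    return (u1 * u2, (v1**q + v2**q - v1**q * v2**q) ** (1 / q))

def q_scale(lam, a, q=3):
    """lam * a for a positive real lam."""
    u, v = a
    return ((1 - (1 - u**q) ** lam) ** (1 / q), v**lam)

def q_pow(a, lam, q=3):
    """a ** lam for a positive real lam."""
    u, v = a
    return (u**lam, (1 - (1 - v**q) ** lam) ** (1 / q))
```

A quick numerical check confirms that the results stay inside the *q*-rung orthopair constraint $\mu^q+v^q\le 1$.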

Liu and Wang [27] also proposed the following method for comparing two *q*-ROFNs.

**Definition 3 [27].** *Let $\widetilde{a}=(\mu_a, v_a)$ be a q-ROFN, then the score of $\widetilde{a}$ is defined as $S(\widetilde{a})=\mu_a^q-v_a^q$, and the accuracy of $\widetilde{a}$ is defined as $H(\widetilde{a})=\mu_a^q+v_a^q$. For any two q-ROFNs $\widetilde{a}_1=(\mu_1, v_1)$ and $\widetilde{a}_2=(\mu_2, v_2)$:*

- (1) *If $S(\widetilde{a}_1)>S(\widetilde{a}_2)$, then $\widetilde{a}_1>\widetilde{a}_2$;*
- (2) *If $S(\widetilde{a}_1)=S(\widetilde{a}_2)$, then: if $H(\widetilde{a}_1)>H(\widetilde{a}_2)$, then $\widetilde{a}_1>\widetilde{a}_2$; if $H(\widetilde{a}_1)=H(\widetilde{a}_2)$, then $\widetilde{a}_1=\widetilde{a}_2$.*
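The score and accuracy of Definition 3 give a simple two-stage ranking: compare scores first and fall back to accuracy on ties. A minimal Python sketch (the helper names are ours):

```python
# Sketch of the q-ROFN comparison law based on Definition 3.

def score(a, q=3):
    """Score S(a) = u^q - v^q of a q-ROFN a = (u, v)."""
    u, v = a
    return u**q - v**q

def accuracy(a, q=3):
    """Accuracy H(a) = u^q + v^q of a q-ROFN a = (u, v)."""
    u, v = a
    return u**q + v**q

def greater(a1, a2, q=3):
    """True if a1 > a2 under the score-then-accuracy ranking."""
    s1, s2 = score(a1, q), score(a2, q)
    if s1 != s2:
        return s1 > s2
    return accuracy(a1, q) > accuracy(a2, q)
```

For instance, $(0.7, 0.7)$ and $(0.5, 0.5)$ have equal scores, so the accuracy decides and ranks $(0.7, 0.7)$ higher.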

The PIFS, constructed by a positive membership degree, a neutral membership degree as well as a negative membership degree, was originally proposed by Cuong [29].

**Definition 4 [29].** *Let X be an ordinary fixed set, a picture fuzzy set (PIFS) B defined on X is given as follows*

$$B = \{ \langle \mathbf{x}, \mu\_B(\mathbf{x}), \eta\_B(\mathbf{x}), \upsilon\_B(\mathbf{x}) \rangle | \mathbf{x} \in X \},\tag{2}$$

*where $\mu_B(x)\in[0,1]$ is called the degree of positive membership of B, $\eta_B(x)\in[0,1]$ is called the degree of neutral membership of B and $v_B(x)\in[0,1]$ is called the degree of negative membership of B, and $\mu_B(x)$, $\eta_B(x)$, $v_B(x)$ satisfy the condition $0\le\mu_B(x)+\eta_B(x)+v_B(x)\le 1$, $\forall x\in X$. Then for $x\in X$, $\pi_B(x)=1-\left(\mu_B(x)+\eta_B(x)+v_B(x)\right)$ is called the degree of refusal membership of x in B.*

Motivated by the concepts of *q*-ROFS and PIFS, we give the definition of *q*-RPFS.

**Definition 5.** *Let X be an ordinary fixed set, a q-rung picture fuzzy set (q-RPFS) C defined on X is given as follows*

$$\mathcal{C} = \{ \langle \mathbf{x}, \mu\_{\mathcal{C}}(\mathbf{x}), \eta\_{\mathcal{C}}(\mathbf{x}), \upsilon\_{\mathcal{C}}(\mathbf{x}) \rangle | \mathbf{x} \in X \}, \tag{3}$$

*where $\mu_C(x)$, $\eta_C(x)$ and $v_C(x)$ represent the degree of positive membership, the degree of neutral membership and the degree of negative membership respectively, satisfying $\mu_C(x)\in[0,1]$, $\eta_C(x)\in[0,1]$, $v_C(x)\in[0,1]$ and $0\le\mu_C(x)^q+\eta_C(x)^q+v_C(x)^q\le 1$ $(q\ge 1)$, $\forall x\in X$. Then for $x\in X$, $\pi_C(x)=\left(1-\left(\mu_C(x)^q+\eta_C(x)^q+v_C(x)^q\right)\right)^{1/q}$ is called the degree of refusal membership of x in C.*
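The constraint and the refusal degree in Definition 5 are straightforward to evaluate numerically; for instance (a sketch, with our own helper names):

```python
# Sketch: validity check and refusal degree for a q-RPF value (Definition 5).

def is_valid_qrpf(u, eta, v, q=3):
    """Check 0 <= u, eta, v <= 1 and u^q + eta^q + v^q <= 1."""
    return all(0.0 <= x <= 1.0 for x in (u, eta, v)) and u**q + eta**q + v**q <= 1.0

def refusal(u, eta, v, q=3):
    """Refusal degree pi = (1 - (u^q + eta^q + v^q))^(1/q)."""
    return (1.0 - (u**q + eta**q + v**q)) ** (1.0 / q)
```

Raising *q* enlarges the admissible space: the triple $(0.9, 0.7, 0.5)$ violates the constraint for $q=3$ but satisfies it for $q=4$.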

*2.2. Linguistic Term Sets and q-Rung Picture Linguistic Set (q-RPLS)*

Let $S=\{s_i \mid i=1,2,\ldots,t\}$ be a linguistic term set with odd cardinality, where $t$ is the cardinality of $S$. The label $s_i$ represents a possible value for a linguistic variable. For instance, a possible linguistic term set can be defined as follows:

*S* = (*s*<sub>1</sub>, *s*<sub>2</sub>, *s*<sub>3</sub>, *s*<sub>4</sub>, *s*<sub>5</sub>, *s*<sub>6</sub>, *s*<sub>7</sub>) = {very poor, poor, slightly poor, fair, slightly good, good, very good}.

Motivated by the concept of picture linguistic set [41], we shall define the concept of *q*-RPLS by combining the linguistic term set with *q*-RPFS.

**Definition 6.** *Let X be an ordinary fixed set and $S=\{s_i \mid i=1,2,\ldots,t\}$ be a continuous linguistic term set, then a q-rung picture linguistic set (q-RPLS) D defined on X is given as follows*

$$D=\left\{\left\langle s_{\theta(x)},\mu_D(x),\eta_D(x),v_D(x)\right\rangle \mid x\in X\right\},\tag{4}$$

*where $s_{\theta(x)}\in S$, $\mu_D(x)\in[0,1]$ is called the degree of positive membership of D, $\eta_D(x)\in[0,1]$ is called the degree of neutral membership of D and $v_D(x)\in[0,1]$ is called the degree of negative membership of D, and $\mu_D(x)$, $\eta_D(x)$, $v_D(x)$ satisfy the condition $0\le\mu_D(x)^q+\eta_D(x)^q+v_D(x)^q\le 1$ $(q\ge 1)$, $\forall x\in X$. Then $\left\langle s_{\theta(x)},\left(\mu_D(x),\eta_D(x),v_D(x)\right)\right\rangle$ is called a q-RPLN, which can be simply denoted by $\alpha=\langle s_{\theta},(\mu,\eta,v)\rangle$. When q = 1, D reduces to the picture linguistic set proposed by Liu and Zhang [41].*

In the following, we provide some operations for *q*-RPLNs.

**Definition 7.** *Let $\alpha=\langle s_{\theta},(\mu,\eta,v)\rangle$, $\alpha_1=\langle s_{\theta_1},(\mu_1,\eta_1,v_1)\rangle$ and $\alpha_2=\langle s_{\theta_2},(\mu_2,\eta_2,v_2)\rangle$ be three q-RPLNs and λ be a positive real number, then*

$$1.\quad \alpha_1\oplus\alpha_2=\left\langle s_{\theta_1+\theta_2},\left(\left(\mu_1^q+\mu_2^q-\mu_1^q\mu_2^q\right)^{1/q},\,\eta_1\eta_2,\,v_1v_2\right)\right\rangle,$$

$$2.\quad \alpha_1\otimes\alpha_2=\left\langle s_{\theta_1\times\theta_2},\left(\mu_1\mu_2,\,\left(\eta_1^q+\eta_2^q-\eta_1^q\eta_2^q\right)^{1/q},\,\left(v_1^q+v_2^q-v_1^qv_2^q\right)^{1/q}\right)\right\rangle,$$

$$3.\quad \lambda\alpha=\left\langle s_{\lambda\times\theta},\left(\left(1-\left(1-\mu^q\right)^{\lambda}\right)^{1/q},\,\eta^{\lambda},\,v^{\lambda}\right)\right\rangle,$$

$$4.\quad \alpha^{\lambda}=\left\langle s_{\theta^{\lambda}},\left(\mu^{\lambda},\,\left(1-\left(1-\eta^q\right)^{\lambda}\right)^{1/q},\,\left(1-\left(1-v^q\right)^{\lambda}\right)^{1/q}\right)\right\rangle.$$
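The four operations of Definition 7 translate directly into code. A minimal sketch, representing a *q*-RPLN $\langle s_{\theta},(\mu,\eta,v)\rangle$ as a tuple `(theta, u, eta, v)` (our own convention):

```python
# Sketch of the q-RPLN operations in Definition 7.

def rpl_add(a1, a2, q=3):
    """a1 (+) a2."""
    t1, u1, e1, v1 = a1
    t2, u2, e2, v2 = a2
    return (t1 + t2,
            (u1**q + u2**q - u1**q * u2**q) ** (1 / q),
            e1 * e2, v1 * v2)

def rpl_mul(a1, a2, q=3):
    """a1 (x) a2."""
    t1, u1, e1, v1 = a1
    t2, u2, e2, v2 = a2
    return (t1 * t2, u1 * u2,
            (e1**q + e2**q - e1**q * e2**q) ** (1 / q),
            (v1**q + v2**q - v1**q * v2**q) ** (1 / q))

def rpl_scale(lam, a, q=3):
    """lam * a for a positive real lam."""
    t, u, e, v = a
    return (lam * t, (1 - (1 - u**q) ** lam) ** (1 / q), e**lam, v**lam)

def rpl_pow(a, lam, q=3):
    """a ** lam for a positive real lam."""
    t, u, e, v = a
    return (t**lam, u**lam,
            (1 - (1 - e**q) ** lam) ** (1 / q),
            (1 - (1 - v**q) ** lam) ** (1 / q))
```

Addition and multiplication are commutative, and scaling or raising to the power 1 leaves a *q*-RPLN unchanged, which gives quick sanity checks.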

To compare two *q*-RPLNs, we first propose the score function and accuracy function of a *q*-RPLN, based on which we give a comparison law for *q*-RPLNs.

**Definition 8.** *Let $\alpha=\langle s_{\theta},(\mu,\eta,v)\rangle$ be a q-RPLN, then the score function of α is defined as*

$$S(\alpha)=\left(\mu^q+1-v^q\right)\times\theta.\tag{5}$$

**Definition 9.** *Let $\alpha=\langle s_{\theta},(\mu,\eta,v)\rangle$ be a q-RPLN, then the accuracy function of α is defined as*

$$H(\alpha)=\left(\mu^q+\eta^q+v^q\right)\times\theta.\tag{6}$$

**Definition 10.** *Let $\alpha_1=\langle s_{\theta_1},(\mu_1,\eta_1,v_1)\rangle$ and $\alpha_2=\langle s_{\theta_2},(\mu_2,\eta_2,v_2)\rangle$ be two q-RPLNs, $S(\alpha_1)$ and $S(\alpha_2)$ be the score functions of $\alpha_1$ and $\alpha_2$ respectively, and $H(\alpha_1)$ and $H(\alpha_2)$ be the accuracy functions of $\alpha_1$ and $\alpha_2$ respectively, then*

- (1) *if $S(\alpha_1)>S(\alpha_2)$, then $\alpha_1>\alpha_2$;*
- (2) *if $S(\alpha_1)=S(\alpha_2)$, then: if $H(\alpha_1)>H(\alpha_2)$, then $\alpha_1>\alpha_2$; if $H(\alpha_1)=H(\alpha_2)$, then $\alpha_1=\alpha_2$.*
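Definitions 8–10 can be sketched as follows (the tuple layout `(theta, u, eta, v)` and the function names are our own convention):

```python
# Sketch of the q-RPLN score, accuracy, and comparison law (Definitions 8-10).

def rpl_score(a, q=3):
    """S(alpha) = (u^q + 1 - v^q) * theta."""
    t, u, e, v = a
    return (u**q + 1 - v**q) * t

def rpl_accuracy(a, q=3):
    """H(alpha) = (u^q + eta^q + v^q) * theta."""
    t, u, e, v = a
    return (u**q + e**q + v**q) * t

def rpl_greater(a1, a2, q=3):
    """True if a1 > a2: compare scores, break ties with accuracy."""
    s1, s2 = rpl_score(a1, q), rpl_score(a2, q)
    if s1 != s2:
        return s1 > s2
    return rpl_accuracy(a1, q) > rpl_accuracy(a2, q)
```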

*2.3. Heronian Mean*

**Definition 11 [43,45].** *Let ai*(*<sup>i</sup>* = 1, 2, ..., *n*) *be a collection of crisp numbers, and s, t* ≥ 0*, then the Heronian mean (HM) is defined as follows:*

$$HM^{s,t}(a\_1, a\_2, \ldots, a\_n) = \left(\frac{2}{n(n+1)} \sum\_{i=1}^n \sum\_{j=i}^n a\_i^s a\_j^t\right)^{1/(s+t)}\tag{7}$$

**Definition 12 [46].** *Let ai*(*<sup>i</sup>* = 1, 2, ..., *n*) *be a collection of crisp numbers, and s, t* ≥ 0*, then the geometric Heronian mean (GHM) is defined as follows:*

$$GHM^{s,t}(a_1,a_2,\ldots,a_n)=\frac{1}{s+t}\prod_{i=1}^{n}\prod_{j=i}^{n}\left(sa_i+ta_j\right)^{\frac{2}{n(n+1)}}\tag{8}$$
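For crisp inputs, both means are easy to implement and to check against the defining formulas. A minimal sketch (the function names are ours):

```python
# Sketch of the crisp Heronian mean (HM) and geometric Heronian mean (GHM)
# of Definitions 11-12, with parameters s, t >= 0.

def hm(a, s=1.0, t=1.0):
    """HM^{s,t}: power mean over all ordered pairs (i, j) with j >= i."""
    n = len(a)
    total = sum(a[i]**s * a[j]**t for i in range(n) for j in range(i, n))
    return (2.0 / (n * (n + 1)) * total) ** (1.0 / (s + t))

def ghm(a, s=1.0, t=1.0):
    """GHM^{s,t}: geometric counterpart built from s*a_i + t*a_j."""
    n = len(a)
    prod = 1.0
    for i in range(n):
        for j in range(i, n):
            prod *= (s * a[i] + t * a[j]) ** (2.0 / (n * (n + 1)))
    return prod / (s + t)
```

Both operators are idempotent, i.e. $HM^{s,t}(a,\ldots,a)=GHM^{s,t}(a,\ldots,a)=a$, a property the fuzzy extensions below inherit.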

#### **3. The** *q***-Rung Picture Linguistic Heronian Mean Operators**

In this section, we extend the HM to *q*-rung picture linguistic environment and propose a family of *q*-rung picture linguistic Heronian mean operators. Moreover, some desirable properties of the proposed aggregation operators are presented and discussed.

*3.1. The q-Rung Picture Linguistic Heronian Mean (q-RPLHM) Operator*

**Definition 13.** *Let $\alpha_i=\langle s_{\theta_i},(\mu_i,\eta_i,v_i)\rangle$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, and s, t > 0. If*

$$q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\alpha_i^s\alpha_j^t\right)^{1/(s+t)},\tag{9}$$

*then $q - RPLHM^{s,t}$ is called the q-rung picture linguistic Heronian mean (q-RPLHM) operator.*

According to the operations for *q*-RPLNs, the following theorem can be obtained.

**Theorem 1.** *Let $\alpha_i=\langle s_{\theta_i},(\mu_i,\eta_i,v_i)\rangle$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, then the aggregated value by using the q-RPLHM operator is also a q-RPLN and*

$$\begin{split}
&q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\theta_i^s\theta_j^t\right)^{1/(s+t)}},\Bigg(\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\mu_i^{sq}\mu_j^{tq}\right)^{\frac{2}{n(n+1)}}\right)^{1/q(s+t)},\\
&\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i^q\right)^s\left(1-\eta_j^q\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/q},\\
&\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i^q\right)^s\left(1-v_j^q\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/q}\Bigg)\Bigg\rangle.
\end{split}\tag{10}$$

**Proof.** According to the operations for *q*-RPLNs, we can obtain the following:

$$\begin{aligned}
\alpha_i^s&=\left\langle s_{\theta_i^s},\left(\mu_i^s,\left(1-\left(1-\eta_i^q\right)^s\right)^{1/q},\left(1-\left(1-v_i^q\right)^s\right)^{1/q}\right)\right\rangle,\\
\alpha_j^t&=\left\langle s_{\theta_j^t},\left(\mu_j^t,\left(1-\left(1-\eta_j^q\right)^t\right)^{1/q},\left(1-\left(1-v_j^q\right)^t\right)^{1/q}\right)\right\rangle.
\end{aligned}$$

Therefore,

$$\alpha_i^s\alpha_j^t=\left\langle s_{\theta_i^s\theta_j^t},\left(\mu_i^s\mu_j^t,\left(1-\left(1-\eta_i^q\right)^s\left(1-\eta_j^q\right)^t\right)^{1/q},\left(1-\left(1-v_i^q\right)^s\left(1-v_j^q\right)^t\right)^{1/q}\right)\right\rangle.$$

Further,

$$\sum_{j=i}^{n}\alpha_i^s\alpha_j^t=\left\langle s_{\sum_{j=i}^{n}\theta_i^s\theta_j^t},\left(\left(1-\prod_{j=i}^{n}\left(1-\mu_i^{sq}\mu_j^{tq}\right)\right)^{1/q},\left(\prod_{j=i}^{n}\left(1-\left(1-\eta_i^q\right)^s\left(1-\eta_j^q\right)^t\right)\right)^{1/q},\left(\prod_{j=i}^{n}\left(1-\left(1-v_i^q\right)^s\left(1-v_j^q\right)^t\right)\right)^{1/q}\right)\right\rangle.$$

In addition,

$$\sum_{i=1}^{n}\sum_{j=i}^{n}\alpha_i^s\alpha_j^t=\left\langle s_{\sum_{i=1}^{n}\sum_{j=i}^{n}\theta_i^s\theta_j^t},\left(\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\mu_i^{sq}\mu_j^{tq}\right)\right)^{1/q},\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i^q\right)^s\left(1-\eta_j^q\right)^t\right)\right)^{1/q},\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i^q\right)^s\left(1-v_j^q\right)^t\right)\right)^{1/q}\right)\right\rangle.$$

Thus,

$$\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\alpha_i^s\alpha_j^t=\left\langle s_{\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\theta_i^s\theta_j^t},\left(\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\mu_i^{sq}\mu_j^{tq}\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/q},\left(\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i^q\right)^s\left(1-\eta_j^q\right)^t\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/q},\left(\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i^q\right)^s\left(1-v_j^q\right)^t\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/q}\right)\right\rangle.$$

So,

$$\begin{split}
&q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\alpha_i^s\alpha_j^t\right)^{1/(s+t)}\\
&=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\theta_i^s\theta_j^t\right)^{1/(s+t)}},\Bigg(\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\mu_i^{sq}\mu_j^{tq}\right)^{\frac{2}{n(n+1)}}\right)^{1/q(s+t)},\\
&\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i^q\right)^s\left(1-\eta_j^q\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/q},\\
&\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i^q\right)^s\left(1-v_j^q\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/q}\Bigg)\Bigg\rangle,
\end{split}$$

which completes the proof.
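The derivation above can be cross-checked numerically by composing the Definition 7 operations exactly as Definition 13 prescribes, without expanding the closed form. A sketch (the tuple layout `(theta, u, eta, v)` and helper names are ours):

```python
# Numerical sketch of the q-RPLHM operator (Definition 13), built by
# composing the q-RPLN operations of Definition 7.

def _add(a1, a2, q):
    t1, u1, e1, v1 = a1
    t2, u2, e2, v2 = a2
    return (t1 + t2, (u1**q + u2**q - u1**q * u2**q) ** (1 / q),
            e1 * e2, v1 * v2)

def _mul(a1, a2, q):
    t1, u1, e1, v1 = a1
    t2, u2, e2, v2 = a2
    return (t1 * t2, u1 * u2,
            (e1**q + e2**q - e1**q * e2**q) ** (1 / q),
            (v1**q + v2**q - v1**q * v2**q) ** (1 / q))

def _scale(lam, a, q):
    t, u, e, v = a
    return (lam * t, (1 - (1 - u**q) ** lam) ** (1 / q), e**lam, v**lam)

def _power(a, lam, q):
    t, u, e, v = a
    return (t**lam, u**lam,
            (1 - (1 - e**q) ** lam) ** (1 / q),
            (1 - (1 - v**q) ** lam) ** (1 / q))

def q_rplhm(alphas, s=1.0, t=1.0, q=3):
    """(2/(n(n+1)) * sum_{i<=j} a_i^s a_j^t) ^ (1/(s+t)) over q-RPLNs."""
    n = len(alphas)
    acc = None
    for i in range(n):
        for j in range(i, n):
            term = _mul(_power(alphas[i], s, q), _power(alphas[j], t, q), q)
            acc = term if acc is None else _add(acc, term, q)
    return _power(_scale(2.0 / (n * (n + 1)), acc, q), 1.0 / (s + t), q)
```

Aggregating *n* copies of the same *q*-RPLN returns that *q*-RPLN (idempotency), which makes a convenient sanity check.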


In addition, the *q*-RPLHM operator has the following properties.

**Theorem 2 (Monotonicity).** *Let αi and βi* (*i* = 1, 2, ..., *n*) *be two collections of q-RPLNs, if αi* ≤ *βi for all i* = 1, 2, ··· , *n, then*

$$q - RPLHM^{s,t}(\mathfrak{a}\_1, \mathfrak{a}\_2, \dots, \mathfrak{a}\_n) \le q - RPLHM^{s,t}(\beta\_1, \beta\_2, \dots, \beta\_n). \tag{11}$$

**Proof.** Since $\alpha_i\le\beta_i$ and $\alpha_j\le\beta_j$ for $i=1,2,\cdots,n$ and $j=i,i+1,\cdots,n$, we have $\alpha_i^s\alpha_j^t\le\beta_i^s\beta_j^t$. Then

$$\frac{2}{n(n+1)}\sum\_{i=1}^{n}\sum\_{j=i}^{n}\alpha\_{i}^{s}\alpha\_{j}^{t} \leq \frac{2}{n(n+1)}\sum\_{i=1}^{n}\sum\_{j=i}^{n}\beta\_{i}^{s}\beta\_{j}^{t}.$$

So,

$$\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\alpha_i^s\alpha_j^t\right)^{1/(s+t)}\le\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\beta_i^s\beta_j^t\right)^{1/(s+t)},$$

i.e.,

$$q - RPLHM^{s,t}(\mathfrak{a}\_1, \mathfrak{a}\_2, \dots, \mathfrak{a}\_{\mathfrak{n}}) \le q - RPLHM^{s,t}(\beta\_1, \beta\_2, \dots, \beta\_{\mathfrak{n}}).$$


**Theorem 3 (Idempotency).** *Let $\alpha_i$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, if all the q-RPLNs are equal, i.e., $\alpha_i=\alpha$ for all i, then*

$$q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\alpha.\tag{12}$$

**Proof.** Since *αi* = *α*, for all *i*, we have

$$q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\alpha_i^s\alpha_j^t\right)^{1/(s+t)}=\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\alpha^s\alpha^t\right)^{1/(s+t)}=\left(\alpha^{s+t}\right)^{1/(s+t)}=\alpha.$$


**Theorem 4 (Boundedness).** *The q-RPLHM operator lies between the max and min operators*

$$\min(\alpha_1,\alpha_2,\ldots,\alpha_n)\le q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\le\max(\alpha_1,\alpha_2,\ldots,\alpha_n).\tag{13}$$

**Proof.** Let $a=\min(\alpha_1,\alpha_2,\ldots,\alpha_n)$ and $b=\max(\alpha_1,\alpha_2,\ldots,\alpha_n)$. According to Theorem 2, we have

$$q - RPLHM^{s,t}(a, a, \ldots, a) \le q - RPLHM^{s,t}(a\_1, a\_2, \ldots, a\_n) \le q - RPLHM^{s,t}(b, b, \ldots, b).$$

$$\text{Further, } q - RPLHM^{s,t}(a, a, \ldots, a) = a \text{ and } q - RPLHM^{s,t}(b, b, \ldots, b) = b.$$

$$\text{So,}$$

$$a \le q - RPLHM^{s,t}(a\_1, a\_2, \ldots, a\_n) \le b,$$

i.e.,

$$
\min(\mathfrak{a}\_1, \mathfrak{a}\_2, \dots, \mathfrak{a}\_n) \le q - RPLHM^{s,t}(\mathfrak{a}\_1, \mathfrak{a}\_2, \dots, \mathfrak{a}\_n) \le \max(\mathfrak{a}\_1, \mathfrak{a}\_2, \dots, \mathfrak{a}\_n).
$$


The parameters *s* and *t* play an important role in the aggregated results. In the following, we discuss some special cases of the *q*-RPLHM operator with respect to the parameters *s* and *t*.

**Case 1:** When *t* → 0, the *q*-RPLHM operator reduces to the following:

$$\begin{split}
&q - RPLHM^{s,0}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\lim_{t\to 0}q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\\
&=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}(n+1-i)\theta_i^s\right)^{1/s}},\Bigg(\left(1-\prod_{i=1}^{n}\left(1-\mu_i^{sq}\right)^{\frac{2(n+1-i)}{n(n+1)}}\right)^{1/qs},\\
&\left(1-\left(1-\prod_{i=1}^{n}\left(1-\left(1-\eta_i^q\right)^s\right)^{\frac{2(n+1-i)}{n(n+1)}}\right)^{1/s}\right)^{1/q},\left(1-\left(1-\prod_{i=1}^{n}\left(1-\left(1-v_i^q\right)^s\right)^{\frac{2(n+1-i)}{n(n+1)}}\right)^{1/s}\right)^{1/q}\Bigg)\Bigg\rangle,
\end{split}\tag{14}$$

which is a *q*-rung picture linguistic generalized linear descending weighted mean operator. Evidently, it is equivalent to weighting the arguments $\left(\alpha_1^s,\alpha_2^s,\ldots,\alpha_n^s\right)$ with $(n,n-1,\ldots,1)$.

**Case 2:** When *s* → 0, the *q*-RPLHM operator reduces to the following:

$$\begin{split}
&q - RPLHM^{0,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\lim_{s\to 0}q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\\
&=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}i\,\theta_i^t\right)^{1/t}},\Bigg(\left(1-\prod_{i=1}^{n}\left(1-\mu_i^{tq}\right)^{\frac{2i}{n(n+1)}}\right)^{1/qt},\\
&\left(1-\left(1-\prod_{i=1}^{n}\left(1-\left(1-\eta_i^q\right)^t\right)^{\frac{2i}{n(n+1)}}\right)^{1/t}\right)^{1/q},\left(1-\left(1-\prod_{i=1}^{n}\left(1-\left(1-v_i^q\right)^t\right)^{\frac{2i}{n(n+1)}}\right)^{1/t}\right)^{1/q}\Bigg)\Bigg\rangle,
\end{split}\tag{15}$$

which is a *q*-rung picture linguistic generalized linear ascending weighted mean operator. Obviously, it is equivalent to weighting the arguments $\left(\alpha_1^t,\alpha_2^t,\ldots,\alpha_n^t\right)$ with $(1,2,\ldots,n)$; i.e., when *t* → 0 or *s* → 0, the *q*-RPLHM operator acts as a linear weighted function of the input data.

**Case 3:** When *s* = *t* = 1, the *q*-RPLHM operator reduces to the following:

$$\begin{split}
&q - RPLHM^{1,1}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\theta_i\theta_j\right)^{1/2}},\Bigg(\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(\mu_i\mu_j\right)^q\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/2q},\\
&\left(1-\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i^q\right)\left(1-\eta_j^q\right)\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/2}\right)^{1/q},\\
&\left(1-\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i^q\right)\left(1-v_j^q\right)\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/2}\right)^{1/q}\Bigg)\Bigg\rangle,
\end{split}\tag{16}$$

which is a *q*-rung picture linguistic line Heronian mean operator.

**Case 4:** When *s* = *t* = 1/2, the *q*-RPLHM operator reduces to the following:

$$\begin{split}
&q - RPLHM^{\frac{1}{2},\frac{1}{2}}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\sqrt{\theta_i\theta_j}},\Bigg(\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\sqrt{\mu_i^q\mu_j^q}\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/q},\\
&\left(\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\sqrt{\left(1-\eta_i^q\right)\left(1-\eta_j^q\right)}\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/q},\left(\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\sqrt{\left(1-v_i^q\right)\left(1-v_j^q\right)}\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/q}\Bigg)\Bigg\rangle,
\end{split}\tag{17}$$

which is a *q*-rung picture linguistic basic Heronian mean operator.

**Case 5:** When *q* = 2, the *q*-RPLHM operator reduces to the following:

$$\begin{split}
&q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\theta_i^s\theta_j^t\right)^{1/(s+t)}},\Bigg(\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\mu_i^{2s}\mu_j^{2t}\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/2(s+t)},\\
&\left(1-\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i^2\right)^s\left(1-\eta_j^2\right)^t\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/2},\\
&\left(1-\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i^2\right)^s\left(1-v_j^2\right)^t\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/2}\Bigg)\Bigg\rangle,
\end{split}\tag{18}$$

which is the Pythagorean picture linguistic Heronian mean operator.

**Case 6:** When *q* = 1, the *q*-RPLHM operator reduces to the following:

$$\begin{split}
&q - RPLHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\theta_i^s\theta_j^t\right)^{1/(s+t)}},\Bigg(\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\mu_i^s\mu_j^t\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)},\\
&1-\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i\right)^s\left(1-\eta_j\right)^t\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)},\\
&1-\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i\right)^s\left(1-v_j\right)^t\right)\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\Bigg)\Bigg\rangle,
\end{split}\tag{19}$$

which is the picture linguistic Heronian mean operator.

#### *3.2. The q-Rung Picture Linguistic Weighted Heronian Mean (q-RPLWHM) Operator*

It is noted that the proposed *q*-RPLHM operator does not consider the self-importance of the aggregated arguments. Therefore, we put forward the weighted Heronian mean for *q*-RPLNs, which also considers the weights of aggregated arguments.

**Definition 14.** *Let $\alpha_i$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, s, t > 0, and $w=(w_1,w_2,\ldots,w_n)^T$ be the weight vector, satisfying $w_i\in[0,1]$ and $\sum_{i=1}^{n}w_i=1$. If*

$$q - RPLWHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\left(nw_i\alpha_i\right)^s\left(nw_j\alpha_j\right)^t\right)^{1/(s+t)},\tag{20}$$

*then $q - RPLWHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)$ is called the q-rung picture linguistic weighted Heronian mean (q-RPLWHM) operator.*

According to the operations for *q*-RPLNs, the following theorem can be obtained.

**Theorem 5.** *Let $\alpha_i$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, $w=(w_1,w_2,\ldots,w_n)^T$ be the weight vector, satisfying $w_i\in[0,1]$ and $\sum_{i=1}^{n}w_i=1$, then the aggregated value by using the q-RPLWHM operator is also a q-RPLN and*

$$\begin{split}
&q - RPLWHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\left(\frac{2}{n(n+1)}\sum_{i=1}^{n}\sum_{j=i}^{n}\left(nw_i\theta_i\right)^s\left(nw_j\theta_j\right)^t\right)^{1/(s+t)}},\\
&\Bigg(\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\left(1-\mu_i^q\right)^{nw_i}\right)^s\left(1-\left(1-\mu_j^q\right)^{nw_j}\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{1/q(s+t)},\\
&\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\eta_i^{nw_iq}\right)^s\left(1-\eta_j^{nw_jq}\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/q},\\
&\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-v_i^{nw_iq}\right)^s\left(1-v_j^{nw_jq}\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{1/(s+t)}\right)^{1/q}\Bigg)\Bigg\rangle.
\end{split}\tag{21}$$

The proof of Theorem 5 is similar to that of Theorem 1, which is omitted here. Similarly, *q*-RPLWHM has the following properties.

**Theorem 6 (Monotonicity).** *Let αi and βi*(*<sup>i</sup>* = 1, 2, ..., *n*) *be two collections of q-RPLNs, if αi* ≤ *βi for all i, then*

$$q - RPLWHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\le q - RPLWHM^{s,t}(\beta_1,\beta_2,\ldots,\beta_n).\tag{22}$$

**Theorem 7 (Boundedness).** *The q-RPLWHM operator lies between the max and min operators*

$$\min(\alpha_1,\alpha_2,\ldots,\alpha_n)\le q - RPLWHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\le\max(\alpha_1,\alpha_2,\ldots,\alpha_n).\tag{23}$$
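Since eq. (20) is simply eq. (9) applied to the rescaled arguments $nw_i\alpha_i$, a q-RPLWHM implementation only needs the scalar multiplication of Definition 7 in front of any q-RPLHM routine. A sketch (the tuple layout `(theta, u, eta, v)` and names are ours):

```python
# Sketch: the weighted operator of Definition 14 is the unweighted q-RPLHM
# applied to the rescaled arguments n*w_i*alpha_i, where the rescaling is
# the scalar multiplication of Definition 7.

def rpl_scale(lam, a, q=3):
    """lam * alpha for a q-RPLN alpha = (theta, u, eta, v)."""
    t, u, e, v = a
    return (lam * t, (1 - (1 - u**q) ** lam) ** (1 / q), e**lam, v**lam)

def weight_arguments(alphas, w, q=3):
    """Return the list [n*w_i*alpha_i] to feed into a q-RPLHM routine."""
    n = len(alphas)
    return [rpl_scale(n * wi, a, q) for wi, a in zip(w, alphas)]
```

With equal weights $w_i=1/n$ the rescaling is the identity, so the q-RPLWHM reduces to the q-RPLHM.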

*3.3. The q-Rung Picture Linguistic Geometric Heronian Mean (q-RPLGHM) Operator*

**Definition 15.** *Let <sup>α</sup>i*(*<sup>i</sup>* = 1, 2, ..., *n*) *be a collection of q-RPLNs, and s*, *t* > 0*. If*

$$q - RPLGHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\frac{1}{s+t}\prod_{i=1}^{n}\prod_{j=i}^{n}\left(s\alpha_i+t\alpha_j\right)^{\frac{2}{n(n+1)}},\tag{24}$$

*then q* − *RPLGHMs*,*<sup>t</sup> is called the q-rung picture linguistic geometric Heronian mean (q-RPLGHM) operator.*

Similarly, the following theorem can be obtained according to Definition 7.

**Theorem 8.** *Let $\alpha_i$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, then the aggregated value by using the q-RPLGHM operator is also a q-RPLN and*

$$\begin{split}
&q - RPLGHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\frac{1}{s+t}\prod_{i=1}^{n}\prod_{j=i}^{n}\left(s\theta_i+t\theta_j\right)^{\frac{2}{n(n+1)}}},\\
&\Bigg(\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\mu_i^q\right)^s\left(1-\mu_j^q\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{s+t}}\right)^{1/q},\\
&\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\eta_i^{sq}\eta_j^{tq}\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{(s+t)q}},\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-v_i^{sq}v_j^{tq}\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{(s+t)q}}\Bigg)\Bigg\rangle.
\end{split}\tag{25}$$

The proof of Theorem 8 is similar to that of Theorem 1. In the following, we present some desirable properties of the *q*-RPLGHM operator.

**Theorem 9 (Idempotency).** *Let $\alpha_i=\langle s_{\theta_i},(\mu_i,\eta_i,v_i)\rangle$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, if all the q-RPLNs are equal, i.e., $\alpha_i=\alpha$ for all i, then*

$$q - RPLGHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\alpha.\tag{26}$$

The proof of Theorem 9 is similar to that of Theorem 3.

**Theorem 10 (Monotonicity).** *Let αi and βi*(*<sup>i</sup>* = 1, 2, ..., *n*) *be two collections of q-RPLNs, if αi* ≤ *βi for all i, then*

$$q - RPLGHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\le q - RPLGHM^{s,t}(\beta_1,\beta_2,\ldots,\beta_n).\tag{27}$$

The proof of Theorem 10 is similar to that of Theorem 2.

**Theorem 11 (Boundedness).** *Let $\alpha_i=\langle s_{\theta_i},(\mu_i,\eta_i,v_i)\rangle$ $(i=1,2,\ldots,n)$ be a collection of q-RPLNs, then*

$$\min(\alpha_1,\alpha_2,\ldots,\alpha_n)\le q - RPLGHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\le\max(\alpha_1,\alpha_2,\ldots,\alpha_n).\tag{28}$$

The proof of Theorem 11 is similar to that of Theorem 4. In the following, we discuss some special cases of the *q*-RPLGHM operator.

**Case 1:** When *t* → 0, the *q*-RPLGHM operator reduces to the following:

$$\begin{split}
&q - RPLGHM^{s,0}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\lim_{t\to 0}q - RPLGHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\\
&=\Bigg\langle s_{\frac{1}{s}\left(\prod_{i=1}^{n}\left(s\theta_i\right)^{n+1-i}\right)^{\frac{2}{n(n+1)}}},\Bigg(\left(1-\left(1-\prod_{i=1}^{n}\left(1-\left(1-\mu_i^q\right)^s\right)^{\frac{2(n+1-i)}{n(n+1)}}\right)^{\frac{1}{s}}\right)^{1/q},\\
&\left(1-\prod_{i=1}^{n}\left(1-\eta_i^{sq}\right)^{\frac{2(n+1-i)}{n(n+1)}}\right)^{\frac{1}{sq}},\left(1-\prod_{i=1}^{n}\left(1-v_i^{sq}\right)^{\frac{2(n+1-i)}{n(n+1)}}\right)^{\frac{1}{sq}}\Bigg)\Bigg\rangle,
\end{split}\tag{29}$$

which is a *q*-rung picture linguistic generalized geometric linear descending weighted mean operator.

**Case 2:** When *s* → 0, the *q*-RPLGHM operator reduces to the following:

$$\begin{split}
&q - RPLGHM^{0,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\lim_{s\to 0}q - RPLGHM^{s,t}(\alpha_1,\alpha_2,\ldots,\alpha_n)\\
&=\Bigg\langle s_{\frac{1}{t}\left(\prod_{j=1}^{n}\left(t\theta_j\right)^{j}\right)^{\frac{2}{n(n+1)}}},\Bigg(\left(1-\left(1-\prod_{j=1}^{n}\left(1-\left(1-\mu_j^q\right)^t\right)^{\frac{2j}{n(n+1)}}\right)^{\frac{1}{t}}\right)^{1/q},\\
&\left(1-\prod_{j=1}^{n}\left(1-\eta_j^{tq}\right)^{\frac{2j}{n(n+1)}}\right)^{\frac{1}{tq}},\left(1-\prod_{j=1}^{n}\left(1-v_j^{tq}\right)^{\frac{2j}{n(n+1)}}\right)^{\frac{1}{tq}}\Bigg)\Bigg\rangle,
\end{split}\tag{30}$$

which is a *q*-rung picture linguistic generalized geometric linear ascending weighted mean operator.

**Case 3:** When *s* = *t* = 1, the *q*-RPLGHM operator reduces to the following:

$$\begin{split}
&q - RPLGHM^{1,1}(\alpha_1,\alpha_2,\ldots,\alpha_n)=\Bigg\langle s_{\frac{1}{2}\prod_{i=1}^{n}\prod_{j=i}^{n}\left(\theta_i+\theta_j\right)^{\frac{2}{n(n+1)}}},\\
&\Bigg(\left(1-\left(1-\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\left(1-\mu_i^q\right)\left(1-\mu_j^q\right)\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{2}}\right)^{1/q},\\
&\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-\eta_i^q\eta_j^q\right)\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{2q}},\left(1-\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1-v_i^qv_j^q\right)\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{2q}}\Bigg)\Bigg\rangle,
\end{split}\tag{31}$$

which is a *q*-rung picture linguistic geometric line Heronian mean operator.

**Case 4:** When *s* = *t* = 1/2, the *q*-RPLGHM operator reduces to the following,

$$q - RPLGHM^{\frac{1}{2},\frac{1}{2}}(\alpha_1, \alpha_2, \dots, \alpha_n) = \left\langle s_{\prod_{i=1}^{n}\prod_{j=i}^{n}\left(\frac{\theta_i + \theta_j}{2}\right)^{\frac{2}{n(n+1)}}}, \left(\left(\prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \sqrt{\left(1 - u_i^q\right)\left(1 - u_j^q\right)}\right)^{\frac{2}{n(n+1)}}\right)^{1/q}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \sqrt{\eta_i^q\eta_j^q}\right)^{\frac{2}{n(n+1)}}\right)^{1/q}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \sqrt{v_i^qv_j^q}\right)^{\frac{2}{n(n+1)}}\right)^{1/q}\right)\right\rangle, \tag{32}$$

which is a *q*-rung picture linguistic basic geometric Heronian mean operator.

**Case 5:** When *q* = 2, the *q*-RPLGHM operator reduces to the following,

$$q - RPLGHM^{s,t}(\alpha_1, \alpha_2, \dots, \alpha_n) = \left\langle s_{\frac{1}{s+t}\prod_{i=1}^{n}\prod_{j=i}^{n}\left(s\theta_i + t\theta_j\right)^{\frac{2}{n(n+1)}}}, \left(\left(1 - \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \left(1 - u_i^2\right)^s\left(1 - u_j^2\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{s+t}}\right)^{1/2}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \eta_i^{2s}\eta_j^{2t}\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{2(s+t)}}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - v_i^{2s}v_j^{2t}\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{2(s+t)}}\right)\right\rangle, \tag{33}$$

which is the Pythagorean picture linguistic geometric Heronian mean operator.

**Case 6:** When *q* = 1, the *q*-RPLGHM operator reduces to the following,

$$q - RPLGHM^{s,t}(\alpha_1, \alpha_2, \dots, \alpha_n) = \left\langle s_{\frac{1}{s+t}\prod_{i=1}^{n}\prod_{j=i}^{n}\left(s\theta_i + t\theta_j\right)^{\frac{2}{n(n+1)}}}, \left(1 - \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \left(1 - u_i\right)^s\left(1 - u_j\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{s+t}}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \eta_i^{s}\eta_j^{t}\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{s+t}}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - v_i^{s}v_j^{t}\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{s+t}}\right)\right\rangle, \tag{34}$$

which is the picture linguistic geometric Heronian mean operator.
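The general *q*-RPLGHM closed form underlying Cases 1–6 can be sanity-checked numerically. The sketch below is a minimal Python implementation of that closed form; the tuple encoding `(theta, u, eta, v)` of a *q*-RPLN is an assumption of this sketch, not notation from the paper.

```python
def q_rplghm(alphas, s=1.0, t=1.0, q=3):
    """Closed form of the q-RPLGHM operator.
    alphas: list of q-RPLNs encoded as tuples (theta, u, eta, v)."""
    n = len(alphas)
    e = 2.0 / (n * (n + 1))                      # exponent 2/(n(n+1)) per pair
    P_th = P_u = P_eta = P_v = 1.0
    for i in range(n):
        ti, ui, hi, vi = alphas[i]
        for j in range(i, n):                     # pairs with j >= i
            tj, uj, hj, vj = alphas[j]
            P_th  *= (s * ti + t * tj) ** e
            P_u   *= (1 - (1 - ui**q)**s * (1 - uj**q)**t) ** e
            P_eta *= (1 - hi**(s*q) * hj**(t*q)) ** e
            P_v   *= (1 - vi**(s*q) * vj**(t*q)) ** e
    theta = P_th / (s + t)                        # aggregated linguistic index
    u   = (1 - (1 - P_u) ** (1.0/(s+t))) ** (1.0/q)
    eta = (1 - P_eta) ** (1.0/((s+t)*q))
    v   = (1 - P_v) ** (1.0/((s+t)*q))
    return theta, u, eta, v

# Idempotency: aggregating identical q-RPLNs returns the same q-RPLN.
a = (3.0, 0.6, 0.2, 0.3)
agg = q_rplghm([a, a, a], s=2.0, t=1.0, q=3)
assert all(abs(x - y) < 1e-9 for x, y in zip(agg, a))
```

Setting `s = t = 1` or `s = t = 0.5` reproduces Cases 3 and 4 directly; the assertion checks idempotency, the property that also underlies boundedness.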

*3.4. The q-Rung Picture Linguistic Weighted Geometric Heronian Mean (q-RPLWGHM) Operator*

**Definition 16.** *Let α_i (i = 1, 2, ..., n) be a collection of q-RPLNs, s, t > 0, and w = (w_1, w_2, ..., w_n)^T be the weight vector, satisfying w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. If*

$$q - RPLWGHM^{s,t}(\alpha_1, \alpha_2, \dots, \alpha_n) = \frac{1}{s+t}\bigotimes_{i=1}^{n}\bigotimes_{j=i}^{n}\left(s\alpha_i^{nw_i} \oplus t\alpha_j^{nw_j}\right)^{\frac{2}{n(n+1)}}, \tag{35}$$

*then q-RPLWGHM^{s,t} is called the q-rung picture linguistic weighted geometric Heronian mean (q-RPLWGHM) operator.*

Additionally, the *q*-RPLWGHM operator satisfies the following theorem.

**Theorem 12.** *Let α_i (i = 1, 2, ..., n) be a collection of q-RPLNs and w = (w_1, w_2, ..., w_n)^T be the weight vector, satisfying w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. Then the aggregated value obtained by the q-RPLWGHM operator is also a q-RPLN and*

$$q - RPLWGHM^{s,t}(\alpha_1, \alpha_2, \dots, \alpha_n) = \left\langle s_{\frac{1}{s+t}\prod_{i=1}^{n}\prod_{j=i}^{n}\left(s\theta_i^{nw_i} + t\theta_j^{nw_j}\right)^{\frac{2}{n(n+1)}}}, \left(\left(1 - \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \left(1 - u_i^{nw_iq}\right)^s\left(1 - u_j^{nw_jq}\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{s+t}}\right)^{1/q}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \left(1 - \left(1 - \eta_i^q\right)^{nw_i}\right)^s\left(1 - \left(1 - \eta_j^q\right)^{nw_j}\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{(s+t)q}}, \left(1 - \prod_{i=1}^{n}\prod_{j=i}^{n}\left(1 - \left(1 - \left(1 - v_i^q\right)^{nw_i}\right)^s\left(1 - \left(1 - v_j^q\right)^{nw_j}\right)^t\right)^{\frac{2}{n(n+1)}}\right)^{\frac{1}{(s+t)q}}\right)\right\rangle. \tag{36}$$

The proof of Theorem 12 is similar to that of Theorem 1, which is omitted here. In addition, the *q*-RPLWGHM operator has the following properties.
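A numerical sketch of Theorem 12's closed form (Equation (36)) follows; as before, the `(theta, u, eta, v)` tuple encoding is an assumption of the sketch. With equal weights $w_i = 1/n$, the inner powers $nw_i$ equal one and the operator reduces to the unweighted *q*-RPLGHM, so identical inputs are reproduced exactly.

```python
def q_rplwghm(alphas, w, s=1.0, t=1.0, q=3):
    """Closed form of the q-RPLWGHM operator (Theorem 12, Eq. (36)).
    alphas: q-RPLNs as tuples (theta, u, eta, v); w: normalized weights."""
    n = len(alphas)
    e = 2.0 / (n * (n + 1))
    P_th = P_u = P_eta = P_v = 1.0
    for i in range(n):
        ti, ui, hi, vi = alphas[i]
        for j in range(i, n):
            tj, uj, hj, vj = alphas[j]
            nwi, nwj = n * w[i], n * w[j]        # inner powers n*w_i, n*w_j
            P_th  *= (s * ti**nwi + t * tj**nwj) ** e
            P_u   *= (1 - (1 - ui**(nwi*q))**s * (1 - uj**(nwj*q))**t) ** e
            P_eta *= (1 - (1 - (1 - hi**q)**nwi)**s
                        * (1 - (1 - hj**q)**nwj)**t) ** e
            P_v   *= (1 - (1 - (1 - vi**q)**nwi)**s
                        * (1 - (1 - vj**q)**nwj)**t) ** e
    return (P_th / (s + t),
            (1 - (1 - P_u) ** (1.0/(s+t))) ** (1.0/q),
            (1 - P_eta) ** (1.0/((s+t)*q)),
            (1 - P_v) ** (1.0/((s+t)*q)))

# Equal weights: alpha_i^{n w_i} = alpha_i, so identical inputs are reproduced.
a = (3.0, 0.6, 0.2, 0.3)
r = q_rplwghm([a, a, a], [1/3, 1/3, 1/3], s=1.0, t=2.0, q=3)
assert all(abs(x - y) < 1e-9 for x, y in zip(r, a))
```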

**Theorem 13 (Monotonicity).** *Let α_i and β_i (i = 1, 2, ..., n) be two collections of q-RPLNs. If α_i ≤ β_i for all i, then*

$$q - RPLWGHM^{s,t}(\alpha_1, \alpha_2, \dots, \alpha_n) \le q - RPLWGHM^{s,t}(\beta_1, \beta_2, \dots, \beta_n). \tag{37}$$

**Theorem 14 (Boundedness).** *The q-RPLWGHM operator lies between the max and min operators*

$$\min(\alpha_1, \alpha_2, \dots, \alpha_n) \le q - RPLWGHM^{s,t}(\alpha_1, \alpha_2, \dots, \alpha_n) \le \max(\alpha_1, \alpha_2, \dots, \alpha_n). \tag{38}$$

#### **4. A Novel Approach to MAGDM Based on the Proposed Operators**

In this section, we apply the proposed aggregation operators to solving MAGDM problems in the *q*-rung picture linguistic environment. Consider a MAGDM process in which the attribute values take the form of *q*-RPLNs. Let $A = \{A_1, A_2, \dots, A_m\}$ be the set of alternatives and $C = \{c_1, c_2, \dots, c_n\}$ be the set of attributes with weight vector $w = (w_1, w_2, \dots, w_n)^T$, satisfying $w_j \in [0, 1]$ and $\sum_{j=1}^{n} w_j = 1$. A group of decision-makers $D_k$ $(k = 1, 2, \dots, p)$ is organized to assess every attribute $c_j$ $(j = 1, 2, \dots, n)$ of all alternatives by *q*-RPLNs $\alpha_{ij}^k = \langle s_{\theta_{ij}^k}, (u_{ij}^k, \eta_{ij}^k, v_{ij}^k)\rangle$, and $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_p)$ is the weight vector of the decision-makers. Therefore, the *q*-rung picture linguistic decision matrices can be denoted by $A^k = (\alpha_{ij}^k)_{m \times n}$. The main steps to solve MAGDM problems based on the proposed operators are given as follows.

Step 1. Standardize the original decision matrices. There are two types of attributes, benefit and cost attributes. Therefore, the original decision matrix should be normalized by

$$\alpha_{ij}^{k} = \begin{cases} \left\langle s_{\theta_{ij}^{k}}, \left(u_{ij}^{k}, \eta_{ij}^{k}, v_{ij}^{k}\right)\right\rangle & c_j \in I_1 \\ \left\langle s_{\theta_{ij}^{k}}, \left(v_{ij}^{k}, \eta_{ij}^{k}, u_{ij}^{k}\right)\right\rangle & c_j \in I_2 \end{cases} \tag{39}$$

where *I*1 and *I*2 represent the benefit attributes and cost attributes respectively.
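The normalization in Equation (39) only swaps the positive and negative membership degrees of cost-type assessments, leaving the linguistic term and neutral degree unchanged. A one-line sketch (the tuple encoding of a *q*-RPLN is an assumption):

```python
def normalize(alpha, benefit):
    """Eq. (39): for cost attributes swap positive and negative membership;
    the linguistic index and the neutral degree are unchanged."""
    theta, u, eta, v = alpha
    return (theta, u, eta, v) if benefit else (theta, v, eta, u)

# A cost-type assessment <s_2, (0.6, 0.1, 0.3)> becomes <s_2, (0.3, 0.1, 0.6)>.
assert normalize((2, 0.6, 0.1, 0.3), benefit=False) == (2, 0.3, 0.1, 0.6)
```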

Step 2. Utilize the *q*-RPLWHM operator

$$\alpha_{ij} = q - RPLWHM^{s,t}\left(\alpha_{ij}^1, \alpha_{ij}^2, \dots, \alpha_{ij}^p\right), \tag{40}$$

or the *q*-RPLWGHM operator

$$\alpha_{ij} = q - RPLWGHM^{s,t}\left(\alpha_{ij}^1, \alpha_{ij}^2, \dots, \alpha_{ij}^p\right), \tag{41}$$

to aggregate all the decision matrices $A^k$ $(k = 1, 2, \dots, p)$ into a collective decision matrix $A = (\alpha_{ij})_{m \times n}$.

Step 3. Utilize the *q*-RPLWHM operator

$$\alpha_i = q - RPLWHM^{s,t}(\alpha_{i1}, \alpha_{i2}, \dots, \alpha_{in}), \tag{42}$$

or the *q*-RPLWGHM operator

$$\alpha_i = q - RPLWGHM^{s,t}(\alpha_{i1}, \alpha_{i2}, \dots, \alpha_{in}), \tag{43}$$

to aggregate the assessments $\alpha_{ij}$ $(j = 1, 2, \dots, n)$ for each $A_i$, so that the overall preference values $\alpha_i$ $(i = 1, 2, \dots, m)$ of the alternatives can be obtained.

Step 4. Calculate the score functions of the overall values $\alpha_i$ $(i = 1, 2, \dots, m)$.

Step 5. Rank all alternatives according to the score functions of the corresponding overall values and select the best one(s).

Step 6. End.
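The six steps can be wired together as in the skeleton below. This is illustrative only: a component-wise weighted arithmetic mean stands in as a placeholder for the *q*-RPLWHM/*q*-RPLWGHM operators, and the score function `S(alpha) = theta * (1 + u**q - v**q)` is an assumed form, since both are defined elsewhere in the paper.

```python
# Sketch of Steps 1-6 on a toy 2-alternative x 2-attribute x 2-expert problem
# (all attributes benefit-type, so Step 1 needs no normalization).

def agg(items, weights):
    """Placeholder aggregator: component-wise weighted arithmetic mean
    (NOT the q-RPLWHM/q-RPLWGHM operators of the paper)."""
    return tuple(sum(wt * it[c] for wt, it in zip(weights, items))
                 for c in range(4))

def score(alpha, q=3):
    """Assumed score function S(alpha) = theta * (1 + u^q - v^q)."""
    theta, u, _eta, v = alpha
    return theta * (1 + u**q - v**q)

# Expert matrices E1, E2: rows = alternatives, columns = attributes,
# entries = q-RPLNs encoded as (theta, u, eta, v).
E1 = [[(4, 0.7, 0.1, 0.2), (5, 0.6, 0.2, 0.2)],
      [(2, 0.4, 0.2, 0.5), (3, 0.3, 0.3, 0.4)]]
E2 = [[(5, 0.8, 0.1, 0.1), (4, 0.7, 0.1, 0.2)],
      [(3, 0.5, 0.2, 0.4), (2, 0.4, 0.2, 0.5)]]
lam = [0.6, 0.4]   # decision-maker weights
w = [0.5, 0.5]     # attribute weights

collective = [[agg([E1[i][j], E2[i][j]], lam) for j in range(2)]
              for i in range(2)]                                   # Step 2
overall = [agg(row, w) for row in collective]                      # Step 3
scores = [score(a) for a in overall]                               # Step 4
ranking = sorted(range(2), key=lambda i: scores[i], reverse=True)  # Step 5
assert ranking[0] == 0  # A_1 dominates A_2 component-wise, so it ranks first
```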

## **5. Numerical Instance**

In this part, to validate the proposed method, we provide a numerical instance about choosing an enterprise resource planning (ERP) system, adapted from Liu and Zhang [41]. After a primary evaluation, four possible systems provided by different companies remain on the candidate list, denoted {*A*1, *A*2, *A*3, *A*4}. Four experts *Dk* (*k* = 1, 2, 3, 4) are invited to evaluate the candidates under four attributes: (1) technology *C*1; (2) strategic adaptability *C*2; (3) supplier's ability *C*3; (4) supplier's reputation *C*4. The weight vector of the four attributes is *w* = (0.25, 0.3, 0.25, 0.2)*T*. The decision-makers are required to use picture fuzzy linguistic numbers (PFLNs) on the basis of the linguistic term set *S* = {*s*0 = terrible, *s*1 = bad, *s*2 = poor, *s*3 = neutral, *s*4 = good, *s*5 = well, *s*6 = excellent} to express their preference information. The decision-makers' weight vector is *λ* = (0.3, 0.2, 0.2, 0.3)*T*. After evaluation, the individual picture fuzzy linguistic decision matrices $A^k = (\alpha_{ij}^k)_{4 \times 4}$ can be obtained, as shown in Tables 1–4.

**Table 1.** Decision matrix *A*<sup>1</sup> provided by *D*1.


**Table 2.** Decision matrix *A*<sup>2</sup> provided by *D*2.



**Table 3.** Decision matrix *A*<sup>3</sup> provided by *D*3.

**Table 4.** Decision matrix *A*<sup>4</sup> provided by *D*4.


*5.1. The Decision-Making Process*

Step 1. As the four attributes are benefit types, the original decision matrices do not need normalization.

Step 2. Utilize Equation (40) to calculate the comprehensive value $\alpha_{ij}$ of each attribute for every alternative. The collective decision matrix $A = (\alpha_{ij})_{4 \times 4}$ is shown in Table 5 (suppose *s* = *t* = 1, *q* = 3):

**Table 5.** Collective picture fuzzy linguistic decision matrix (by the *q*-rung picture linguistic weighted Heronian mean (*q*-RPLWHM) operator).


Step 3. Utilize Equation (42) to obtain the overall values of each alternative; we can get

$$\alpha_1 = \langle s_{3.86}, (0.75, 0.27, 0.24)\rangle, \quad \alpha_2 = \langle s_{2.53}, (0.63, 0.32, 0.29)\rangle,$$
$$\alpha_3 = \langle s_{3.65}, (0.61, 0.51, 0.47)\rangle, \quad \alpha_4 = \langle s_{3.10}, (0.57, 0.51, 0.47)\rangle.$$

Step 4. Compute the score functions of the overall values, which are shown as follows:

$$S(\alpha_1) = 5.42, \ S(\alpha_2) = 3.11, \ S(\alpha_3) = 4.11, \ S(\alpha_4) = 3.35.$$
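As a sanity check, these scores can be reproduced (up to the rounding of the displayed components) from the overall values above, assuming the score function of a *q*-RPLN $\langle s_\theta, (u, \eta, v)\rangle$ is $S = \theta(1 + u^q - v^q)$ with *q* = 3; this form is an assumption here, since the score function is defined in the omitted preliminaries.

```python
# Reproduce the reported scores from the reported overall values,
# under the ASSUMED score function S = theta * (1 + u^q - v^q), q = 3.
overall = {
    1: (3.86, 0.75, 0.27, 0.24), 2: (2.53, 0.63, 0.32, 0.29),
    3: (3.65, 0.61, 0.51, 0.47), 4: (3.10, 0.57, 0.51, 0.47),
}
reported = {1: 5.42, 2: 3.11, 3: 4.11, 4: 3.35}
q = 3
for k, (theta, u, eta, v) in overall.items():
    # tolerance covers rounding of the two-decimal displayed components
    assert abs(theta * (1 + u**q - v**q) - reported[k]) < 0.05
```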

Step 5. Then the rank of the four alternatives is obtained

$$A\_1 \succ A\_3 \succ A\_4 \succ A\_2.$$

Therefore, the optimal alternative is *A*1.

In step 2, if we utilize Equation (41) to aggregate the assessments, then we can derive the following collective decision matrix in Table 6 (suppose *s* = *t* = 1, *q* = 3).


**Table 6.** Collective picture fuzzy linguistic decision matrix (by *q*-RPLWGHM operator).

Then we utilize Equation (43) to obtain the following overall values of alternatives:

$$\alpha_1 = \langle s_{3.65}, (0.60, 0.44, 0.44)\rangle, \quad \alpha_2 = \langle s_{2.36}, (0.49, 0.52, 0.52)\rangle,$$
$$\alpha_3 = \langle s_{3.36}, (0.26, 0.75, 0.75)\rangle, \quad \alpha_4 = \langle s_{2.87}, (0.32, 0.67, 0.66)\rangle.$$

In addition, we calculate the score functions of the overall assessments and we can get

$$S(\alpha_1) = 4.13, \ S(\alpha_2) = 2.31, \ S(\alpha_3) = 2.01, \ S(\alpha_4) = 2.14.$$

Therefore, the ranking of the four alternatives is $A_1 \succ A_2 \succ A_4 \succ A_3$ and the best alternative is *A*1.

#### *5.2. The Influence of the Parameters on the Results*

The parameters *q*, *s* and *t* play significant roles in the final ranking results. In the following, we shall investigate the influence of the parameters on the overall assessments of alternatives and the final ranking results. First, we discuss the effects of the parameter *q* on the ranking results (suppose *s* = *t* = 1). Details are presented in Figures 1 and 2.

**Figure 1.** Score values of the alternatives when *q* ∈ [1, 10] and *s* = *t* = 1, based on the *q*-RPLWHM operator.

As seen in Figures 1 and 2, the score values of the overall assessments differ for different values of *q*, leading to different ranking results based on the *q*-RPLWHM and *q*-RPLWGHM operators. However, the best alternative is always *A*1. Decision-makers can choose an appropriate value of *q* according to their preferences. From Figure 1, we find that when *q* ∈ [1, 1.71] the ranking order is $A_1 \succ A_3 \succ A_2 \succ A_4$, and when *q* ∈ [1.71, 10] it is $A_1 \succ A_3 \succ A_4 \succ A_2$ by the *q*-RPLWHM operator. In addition, from Figure 2, when *q* ∈ [1, 4.12] the ranking order is $A_1 \succ A_2 \succ A_4 \succ A_3$; when *q* ∈ [4.12, 4.34] it is $A_1 \succ A_4 \succ A_2 \succ A_3$; when *q* ∈ [4.34, 4.66] it is $A_1 \succ A_4 \succ A_3 \succ A_2$; and when *q* ∈ [4.66, 10] it is $A_1 \succ A_3 \succ A_4 \succ A_2$ by the *q*-RPLWGHM operator.

**Figure 2.** Score values of the alternatives when *q* ∈ [1, 10] and *s* = *t* = 1, using the *q*-RPLWGHM operator.

In the following, we investigate the influence of the parameters *s* and *t* on the score functions and ranking orders, respectively (suppose *q* = 3). Details are presented in Tables 7 and 8.


**Table 7.** Ranking orders by utilizing different values of *s* and *t* in the *q*-RPLWHM operator.

**Table 8.** Ranking orders by utilizing different values of *s* and *t* in the *q*-RPLWGHM operator.


As seen in Tables 7 and 8, when different values are assigned to the parameters *s* and *t*, different scores and corresponding ranking results are obtained. However, the best alternative is always *A*1. In particular, for the *q*-RPLWHM operator, increasing the parameters *s* and *t* increases the score functions, whereas for the *q*-RPLWGHM operator the score functions decrease. Furthermore, the ranking orders of *A*2, *A*3 and *A*4 differ when *s* → 0, *t* = 1 or *s* = 1, *t* → 0, i.e., under the linear weightings of the *q*-RPLWHM and *q*-RPLWGHM operators. Therefore, the parameters *s* and *t* can also be viewed as reflecting decision-makers' optimistic or pessimistic attitudes toward their assessments. This demonstrates the flexibility of the aggregation processes using the proposed operators.

## *5.3. Comparative Analysis*

To further demonstrate the merits and superiorities of the proposed methods, we conduct the following comparative analysis.

#### 5.3.1. Compared with the Method Proposed by Liu and Zhang [41]

We utilize Liu and Zhang's [41] method to solve the above problem; the results can be found in Table 9. From Table 9, we find that the results obtained by Liu and Zhang's [41] method and by the proposed method are quite different. The reasons can be explained as follows: (1) Our method is based on the HM, which considers the interrelationship among attribute values, whereas the method based on the Archimedean picture fuzzy linguistic weighted arithmetic averaging (A-PFLWAA) operator proposed by Liu and Zhang [41] provides only an arithmetic weighting function. In other words, Liu and Zhang's [41] method assumes that attributes are independent. In most real decision-making problems, attributes are correlated, so the interrelationship among them should be taken into consideration. Therefore, our proposed method is more reasonable than Liu and Zhang's [41] method. (2) Liu and Zhang's [41] method is based on the PFLS, which is only a special case of the *q*-RPLS (when *q* = 1). Therefore, our method is more general, flexible and reasonable than that proposed by Liu and Zhang [41].


**Table 9.** Score values and ranking results using our methods and the method in Liu and Zhang [41].

5.3.2. Compared with the Methods Proposed by Wang et al. [47], Liu et al. [48], and Ju et al. [49]

To further demonstrate the effectiveness and validity of the proposed methods, we apply them to the problems in Wang et al. [47], Liu et al. [48], and Ju et al. [49], respectively. Given that there is no existing method to aggregate *q*-rung picture linguistic information, and that there are various methods based on intuitionistic linguistic numbers (ILNs), which are special cases of *q*-RPLNs when *q* equals one and the neutral membership degree *η* equals zero, we use the following three cases described by ILNs to verify our methods. For instance, an ILN $\langle s_1, (0.6, 0.4)\rangle$ can be transformed into the *q*-RPLN $\langle s_1, (0.6, 0, 0.4)\rangle$. The score values and ranking results of the different methods are shown in Tables 10–12.

**Table 10.** Score values and ranking results using our method and the method in Wang et al. [47].



**Table 11.** Score values and ranking results using our method and the method in Liu et al. [48].

**Table 12.** Score values and ranking results using our method and the method in Ju et al. [49].


From Tables 10–12, it is obvious that the ranking results produced by our method differ only slightly from those produced by the other methods. However, the optimal selections are the same, which validates the effectiveness of our methods. As mentioned above, a *q*-RPLN contains more information than an ILN and is a generalization of it. Thus, our method based on *q*-RPLNs can be utilized in a wider range of environments.

Wang et al.'s [47] method is based on the intuitionistic linguistic hybrid averaging (ILHA) operator, which cannot consider the interrelationship among attribute values. Because our proposed method makes up for this disadvantage, it is more reasonable than Wang et al.'s [47] method.

Liu et al.'s [48] method is based on the intuitionistic linguistic weighted Bonferroni mean (ILWBM) operator. It can cope with the interrelationship between arguments, as our method does. However, as Yu and Wu [44] pointed out, the HM has some advantages over the BM, so our method is better than Liu et al.'s [48] method.

Ju et al.'s [49] method is based on the weighted intuitionistic linguistic Maclaurin symmetric mean (WILMSM) operator; when *k* = 2, the interrelationship between any two arguments can be considered, as in our proposed method. However, our methods are based on the *q*-RPLWHM and *q*-RPLWGHM operators, which have two parameters (*s* and *t*). The prominent advantage of our methods is that we can control the degree to which the interactions of attribute values are emphasized: increasing the parameter values places more emphasis on these interactions. Therefore, the decision-making committee can properly select the desirable alternative according to their interests and actual needs by determining the values of the parameters. Moreover, the WILMSM operator proposed by Ju et al. [49] does not consider the balancing coefficient *n*, which can lead to unreasonable results. Our proposed operators take the coefficient *n* into account, so our methods are more reliable and reasonable.

From the above analysis, we can find that our proposed methods can be successfully applied to actual decision-making problems. Compared with other methods, our methods are more flexible and suitable for addressing MAGDM problems. The advantages and merits of the proposed methods can be summarized as follows. Firstly, the proposed methods are based on *q*-RPLSs. The prominent characteristic of the *q*-RPLS is that it allows the sum (and even the square sum) of the positive, neutral, and negative membership degrees to exceed one, requiring only that the sum of their *q*-th powers be no greater than one. This provides more freedom for decision-makers to express their evaluations and leads to less information loss in the process of MAGDM. Secondly, considering that decision-makers often prefer to make qualitative decisions due to lack of time and expertise, the proposed *q*-RPLSs not only express decision-makers' qualitative assessments, but also reflect their quantitative ideas. Therefore, *q*-RPLSs are suitable and sufficient for modeling decision-makers' evaluations of alternatives. Thirdly, in most real decision-making problems, attributes are correlated, so the interrelationship between attribute values should be taken into account when fusing them. Our approach to MAGDM is based on the *q*-RPLWHM and *q*-RPLWGHM operators, which consider the interrelationship between arguments. Therefore, our method can effectively solve actual MAGDM problems. In a word, the proposed method not only provides a new tool for decision-makers to express their assessments, but also effectively models the process of real MAGDM problems. Therefore, our method is more general, powerful and flexible than other methods.
