Article

Probabilistic Linguistic Power Aggregation Operators for Multi-Criteria Group Decision Making

1 School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China
2 Center for West African Studies, University of Electronic Science and Technology of China, Chengdu 610054, China
* Author to whom correspondence should be addressed.
Symmetry 2017, 9(12), 320; https://doi.org/10.3390/sym9120320
Submission received: 4 November 2017 / Revised: 13 December 2017 / Accepted: 13 December 2017 / Published: 19 December 2017
(This article belongs to the Special Issue Fuzzy Techniques for Decision Making)

Abstract: As an effective aggregation tool, the power average (PA) operator allows the input arguments being aggregated to support and reinforce each other, which provides more versatility in the information aggregation process. Under the probabilistic linguistic term environment, we investigate new power aggregation operators for fusing probabilistic linguistic term sets (PLTSs). In this paper, we first develop the probabilistic linguistic power average (PLPA) and weighted probabilistic linguistic power average (WPLPA) operators, as well as the probabilistic linguistic power geometric (PLPG) and weighted probabilistic linguistic power geometric (WPLPG) operators, and we carefully analyze the properties of these new aggregation operators. With the aid of the WPLPA and WPLPG operators, we further design approaches for multi-criteria group decision making (MCGDM) with PLTSs. Finally, we use an illustrative example to expound the proposed methods and verify their performance.

1. Introduction

Yager [1] introduced the power average (PA) operator to provide more versatility in the information aggregation process. The PA is a nonlinear weighted average aggregation tool whose weight vector depends on the input arguments and which allows the values being aggregated to support and reinforce each other [2]. It has received a large amount of attention in the literature. For instance, Xu and Yager [2] developed the power geometric (PG) operator on the basis of the geometric mean (GM) and the power average (PA). Under the linguistic environment, Xu et al. [3] developed new linguistic aggregation operators based on the PA to address the relationships among input arguments. Zhou and Chen [4] discussed a generalization of the power aggregation operators for the linguistic environment and its application in group decision making (GDM). By extending the PA to the linguistic hesitant fuzzy environment, Zhu et al. [5] established a series of linguistic hesitant fuzzy power aggregation operators. As the above-mentioned literature shows, the PA has successfully been extended to many complex and real situations.
One of the useful theories for dealing with multi-criteria decision making (MCDM) problems is the theory of probabilistic linguistic term sets (PLTSs). This theory, proposed by Pang et al. [6], plays a key role in decision processes where experts express their preferences [7,8,9]. Nowadays, PLTSs have become a hot topic in the area of hesitant fuzzy linguistic term sets (HFLTSs) [10,11,12] and hesitant fuzzy sets (HFSs) [13,14]. For example, Pang et al. [6] established a framework for ranking PLTSs and conducted a comparison method via the score or deviation degree of each PLTS. Bai et al. [7] stated that the existing approaches associated with PLTSs are limited or highly complex in real applications; thus, they established a more appropriate comparison method and developed a more efficient way to handle PLTSs. Gou and Xu [15] defined novel operational laws for the probability information. He et al. [16] proposed an algorithm for multi-criteria group decision making (MCGDM) with probabilistic interval preference orderings. Wu and Xu [17] defined the concept of possibility distribution and presented a new framework model to address MCDM. Zhang et al. [18] introduced the concept of probabilistic linguistic preference relations to represent the DMs' preferences. Under the hesitant probabilistic fuzzy environment, Zhou and Xu [19] studied consensus building with a group of decision makers. PLTSs generalize the existing models of HFLTSs and HFSs so as to contain both hesitations and probabilities. Compared with HFLTSs, PLTSs have a stronger ability to express the vagueness and uncertainty of information in hesitant situations under a qualitative setting. With PLTSs, the decision makers (DMs) can not only provide several possible linguistic values over an object (alternative or attribute), but also reflect the probabilistic information of the set of values [6].
In the existing literature, most aggregation operators developed for PLTSs are based on the independence assumption and do not take into account information about the interrelationship between PLTSs being aggregated.
Input arguments expressed as PLTSs may likewise exhibit such interrelationships. The PA provides versatility in the aggregation process and has the ability to depict the interrelationship of input arguments, i.e., it allows the input arguments being aggregated to support and reinforce each other. However, it has rarely been discussed in research on PLTSs. Hence, we introduce the PA into PLTSs and derive new operators that improve upon the existing aggregation operators for PLTSs. In this paper, we first develop four new aggregation operators based on the power average (PA) and the power geometric (PG): the probabilistic linguistic power average (PLPA), the weighted probabilistic linguistic power average (WPLPA), the probabilistic linguistic power geometric (PLPG) and the weighted probabilistic linguistic power geometric (WPLPG). These operators take into account all the decision arguments and their relationships. On the basis of probabilistic linguistic GDM, we utilize the WPLPA or WPLPG operator to aggregate the information and design the corresponding approach. In a word, the desirable advantages of our work are summarized as follows: (1) We involve the probabilistic information: our proposed methods allow the collection of several different linguistic terms evaluated by the DMs while the opinions of the DMs remain unchanged. (2) Our proposed methods also consider the interrelationship of the individual evaluations.
The rest of the paper is structured as follows: Some basic concepts and operations in relation to PLTSs and PA are introduced in Section 2. In Section 3, we develop the PLPA operator, PLPG operator and their own corresponding weighted forms. Meanwhile, we also study several desired properties of these operators. In Section 4, we design the approaches for the application of MCGDM utilizing the WPLPG and WPLPA operators. In Section 5, we give an illustrative example to elaborate and verify our proposed methods. Section 6 concludes the paper and elaborates on future studies.

2. Preliminaries

In this section, we mainly review some basic concepts and operations in relation to PLTSs and PA.

2.1. Probabilistic Linguistic Term Sets (PLTSs)

The concept of PLTSs [6] is an extension of the concepts of HFLTSs. In the following, we review some basic concepts of PLTSs and the corresponding operations.
Definition 1.
[6] Let $S = \{s_t \mid t = 0, 1, \ldots, \tau\}$ be a linguistic term set. Then a probabilistic linguistic term set (PLTS) is defined as:
$$L(p) = \left\{ L^{(k)}(p^{(k)}) \,\middle|\, L^{(k)} \in S,\ r^{(k)} \in \{0, 1, \ldots, \tau\},\ p^{(k)} \ge 0,\ k = 1, 2, \ldots, \#L(p),\ \sum_{k=1}^{\#L(p)} p^{(k)} \le 1 \right\},$$
where $L^{(k)}(p^{(k)})$ is the linguistic term $L^{(k)}$ associated with the probability $p^{(k)}$, $r^{(k)}$ is the subscript of $L^{(k)}$, and $\#L(p)$ is the number of all linguistic terms in $L(p)$.
Since the positions of elements in a set can be swapped arbitrarily, Pang et al. [6] proposed the ordered PLTSs to ensure that the operational results among PLTSs can be straightforwardly determined. It is described as:
Definition 2.
Given a PLTS $L(p) = \{L^{(k)}(p^{(k)}) \mid k = 1, 2, \ldots, \#L(p)\}$, where $r^{(k)}$ is the subscript of the linguistic term $L^{(k)}$, $L(p)$ is called an ordered PLTS if the linguistic terms $L^{(k)}(p^{(k)})$ are arranged according to the values of $r^{(k)} p^{(k)}$ in descending order.
Definition 3.
Let $S = \{s_t \mid t = 0, 1, \ldots, \tau\}$ be a linguistic term set. Given three PLTSs $L(p)$, $L_1(p)$ and $L_2(p)$, their basic operations are summarized as follows [6]:
(1) $L_1(p) \oplus L_2(p) = \bigcup_{L_1^{(k)} \in L_1(p),\, L_2^{(k)} \in L_2(p)} \{ p_1^{(k)} L_1^{(k)} \oplus p_2^{(k)} L_2^{(k)} \}$;
(2) $L_1(p) \otimes L_2(p) = \bigcup_{L_1^{(k)} \in L_1(p),\, L_2^{(k)} \in L_2(p)} \{ (L_1^{(k)})^{p_1^{(k)}} \otimes (L_2^{(k)})^{p_2^{(k)}} \}$;
(3) $\lambda L(p) = \bigcup_{L^{(k)} \in L(p)} \{ \lambda p^{(k)} L^{(k)} \}$, $\lambda \ge 0$;
(4) $(L(p))^{\lambda} = \bigcup_{L^{(k)} \in L(p)} \{ (L^{(k)})^{\lambda p^{(k)}} \}$, $\lambda \ge 0$.
To compare the PLTSs, Pang et al. [6] defined the score and the deviation degree of a PLTS:
Definition 4.
Let $L(p) = \{L^{(k)}(p^{(k)}) \mid k = 1, 2, \ldots, \#L(p)\}$ be a PLTS, where $r^{(k)}$ is the subscript of the linguistic term $L^{(k)}$. Then, the score of $L(p)$ is defined as follows:
$$E(L(p)) = s_{\bar{\alpha}},$$
where $\bar{\alpha} = \sum_{k=1}^{\#L(p)} r^{(k)} p^{(k)} \big/ \sum_{k=1}^{\#L(p)} p^{(k)}$. The deviation degree of $L(p)$ is:
$$\sigma(L(p)) = \left( \sum_{k=1}^{\#L(p)} \big( p^{(k)} (r^{(k)} - \bar{\alpha}) \big)^2 \right)^{0.5} \Big/ \sum_{k=1}^{\#L(p)} p^{(k)}.$$
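To make the score and deviation degree concrete, here is a minimal Python sketch of Definition 4; the list-of-pairs representation of a PLTS and the function names are illustrative assumptions, not part of the original text.

```python
# A PLTS is modelled as a list of (subscript r, probability p) pairs for terms s_r.
def score_subscript(pltl):
    """Subscript alpha-bar of the score E(L(p)) = s_{alpha-bar} (Definition 4)."""
    total_p = sum(p for _, p in pltl)
    return sum(r * p for r, p in pltl) / total_p

def deviation_degree(pltl):
    """sigma(L(p)) = (sum (p(r - alpha-bar))^2)^0.5 / sum p."""
    a = score_subscript(pltl)
    total_p = sum(p for _, p in pltl)
    return sum((p * (r - a)) ** 2 for r, p in pltl) ** 0.5 / total_p

# Example: L(p) = {s_4(0.4), s_3(0.6)} on S = {s_0, ..., s_6}
L = [(4, 0.4), (3, 0.6)]
print(score_subscript(L))   # 3.4, i.e., E(L(p)) = s_3.4
print(deviation_degree(L))
```

For this example the two linguistic terms contribute equal squared deviations, so the deviation degree is small relative to the scale.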
Based on the score and the deviation degree of a PLTS, Pang et al. [6] further proposed the following laws to compare them.
Definition 5.
Given two PLTSs $L_1(p)$ and $L_2(p)$, let $E(L_1(p))$ and $E(L_2(p))$ be their scores and $\sigma(L_1(p))$ and $\sigma(L_2(p))$ their deviation degrees. Then, we have:
(1) If $E(L_1(p)) > E(L_2(p))$, then $L_1(p)$ is bigger than $L_2(p)$, denoted by $L_1(p) > L_2(p)$;
(2) If $E(L_1(p)) < E(L_2(p))$, then $L_1(p)$ is smaller than $L_2(p)$, denoted by $L_1(p) < L_2(p)$;
(3) If $E(L_1(p)) = E(L_2(p))$, then we need to compare their deviation degrees:
(a) If $\sigma(L_1(p)) = \sigma(L_2(p))$, then $L_1(p)$ is equal to $L_2(p)$, denoted by $L_1(p) \sim L_2(p)$;
(b) If $\sigma(L_1(p)) > \sigma(L_2(p))$, then $L_1(p)$ is smaller than $L_2(p)$, denoted by $L_1(p) < L_2(p)$;
(c) If $\sigma(L_1(p)) < \sigma(L_2(p))$, then $L_1(p)$ is bigger than $L_2(p)$, denoted by $L_1(p) > L_2(p)$.
When comparing PLTSs, we may find that the numbers of their linguistic terms are not equal. To solve this problem, Pang et al. [6] normalized the PLTSs by increasing the number of linguistic terms in the shorter one. The normalized definition of PLTSs is the following.
Definition 6.
Let $L_1(p) = \{L_1^{(k)}(p_1^{(k)}) \mid k = 1, 2, \ldots, \#L_1(p)\}$ and $L_2(p) = \{L_2^{(k)}(p_2^{(k)}) \mid k = 1, 2, \ldots, \#L_2(p)\}$ be any two PLTSs, where $\#L_1(p)$ and $\#L_2(p)$ are the numbers of linguistic terms in $L_1(p)$ and $L_2(p)$. If $\#L_1(p) > \#L_2(p)$, then we add $\#L_1(p) - \#L_2(p)$ linguistic terms to $L_2(p)$ so that the numbers of linguistic terms in $L_1(p)$ and $L_2(p)$ are identical. The added linguistic terms are the smallest ones in $L_2(p)$, and the probabilities of the added linguistic terms are zero. Analogously, if $\#L_1(p) < \#L_2(p)$, we can use the same method on $L_1(p)$.
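The padding rule of Definition 6 can be sketched as follows; representing a PLTS as a list of (subscript, probability) pairs and the function name are assumptions for illustration.

```python
# Pad the shorter PLTS with its own smallest linguistic term at probability 0,
# so both PLTSs end up with the same number of terms (Definition 6).
def normalize(l1, l2):
    """Return (l1, l2) with equal lengths; inputs are (subscript, prob) lists."""
    short, long_ = (l1, l2) if len(l1) < len(l2) else (l2, l1)
    smallest = min(r for r, _ in short)            # smallest term of the shorter PLTS
    short = short + [(smallest, 0.0)] * (len(long_) - len(short))
    return (short, long_) if len(l1) < len(l2) else (long_, short)

L1 = [(4, 0.4), (3, 0.6)]             # #L1(p) = 2
L2 = [(5, 0.5), (4, 0.3), (2, 0.2)]   # #L2(p) = 3
N1, N2 = normalize(L1, L2)
print(N1)  # [(4, 0.4), (3, 0.6), (3, 0.0)] -- padded with smallest term s_3
```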
Based on the normalized PLTSs, Pang et al. [6] further defined the deviation degree between PLTSs. The result is shown as follows.
Definition 7.
[6] Let $L_1(p) = \{L_1^{(k)}(p_1^{(k)}) \mid k = 1, 2, \ldots, \#L_1(p)\}$ and $L_2(p) = \{L_2^{(k)}(p_2^{(k)}) \mid k = 1, 2, \ldots, \#L_2(p)\}$ be any two PLTSs. If $\#L_1(p) = \#L_2(p)$, then the deviation degree between the PLTSs is defined as:
$$d(L_1(p), L_2(p)) = \sqrt{ \sum_{k=1}^{\#L_1(p)} \big( r_1^{(k)} p_1^{(k)} - r_2^{(k)} p_2^{(k)} \big)^2 \Big/ \#L_1(p) }.$$
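A direct transcription of this deviation degree, under the same assumed list-of-pairs representation; the two PLTSs must first be normalized to equal length per Definition 6.

```python
# Deviation degree between two equal-length PLTSs (Definition 7); a PLTS is a
# list of (subscript r, probability p) pairs.
def pltl_distance(l1, l2):
    assert len(l1) == len(l2), "normalize the PLTSs first (Definition 6)"
    n = len(l1)
    return (sum((r1 * p1 - r2 * p2) ** 2
                for (r1, p1), (r2, p2) in zip(l1, l2)) / n) ** 0.5

print(pltl_distance([(4, 0.4), (3, 0.6)], [(5, 0.5), (3, 0.5)]))
```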

2.2. Power Average (PA)

Information fusion is the process of aggregating data from different sources by proper aggregation operators. The power average (PA) operator, as a technique for fusing information, was introduced by Yager [1]; it allows the arguments to support each other in the aggregation process.
Definition 8.
[1] Let $A = \{a_1, a_2, \ldots, a_n\}$ be a collection of non-negative numbers. The power average is defined as follows:
$$PA(a_1, a_2, \ldots, a_n) = \frac{\sum_{i=1}^{n} (1 + T(a_i)) a_i}{\sum_{i=1}^{n} (1 + T(a_i))},$$
where
$$T(a_i) = \sum_{j=1, j \ne i}^{n} sup(a_i, a_j).$$
In this case, $sup(a_i, a_j)$ denotes the support for $a_i$ from $a_j$, which satisfies the following three properties:
(1) $sup(a_i, a_j) \in [0, 1]$;
(2) $sup(a_i, a_j) = sup(a_j, a_i)$;
(3) $sup(a_i, a_j) \ge sup(a_i, a_k)$ if $|a_i - a_j| < |a_i - a_k|$.
From the result of Definition 8, the supports among the input arguments are involved in the PA. In general, $sup(a_i, a_j)$ can be measured by the distance between the arguments, e.g., $d(a_i, a_j)$. By introducing the geometric mean (GM), Xu and Yager [2] defined a power geometric (PG) operator as follows:
$$PG(a_1, a_2, \ldots, a_n) = \prod_{i=1}^{n} a_i^{\frac{1 + T(a_i)}{\sum_{j=1}^{n} (1 + T(a_j))}},$$
where $a_i$ ($i = 1, 2, \ldots, n$) is a collection of arguments and $T(a_i)$ satisfies the conditions above.
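The PA and PG operators on plain numbers can be sketched as follows; the concrete support measure (one minus the range-normalized distance) is an assumed choice that satisfies properties (1)-(3), not one prescribed by the text.

```python
# Numeric sketch of Yager's PA and Xu-Yager's PG operators.
def supports(a):
    """T(a_i) = sum of sup(a_i, a_j) over j != i, with an assumed distance-based sup."""
    rng = max(a) - min(a) or 1.0            # avoid division by zero for constant input
    return [sum(1 - abs(ai - aj) / rng for j, aj in enumerate(a) if j != i)
            for i, ai in enumerate(a)]

def power_average(a):
    t = supports(a)
    w = [1 + ti for ti in t]                # unnormalized weights 1 + T(a_i)
    return sum(wi * ai for wi, ai in zip(w, a)) / sum(w)

def power_geometric(a):
    t = supports(a)
    s = sum(1 + ti for ti in t)
    prod = 1.0
    for ai, ti in zip(a, t):
        prod *= ai ** ((1 + ti) / s)        # a_i raised to its normalized weight
    return prod

data = [0.6, 0.7, 0.9]
print(power_average(data))   # 0.72 -- the outlying 0.9 receives less support
print(power_geometric(data))
```

Note how the power average pulls toward the mutually supporting arguments 0.6 and 0.7, whereas a plain arithmetic mean of the data would be about 0.733.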

3. Probabilistic Linguistic Power Aggregation Operators

Under the probabilistic linguistic environment, we assume that the input arguments are PLTSs and we mainly study the extension of power average (PA) and power geometric (PG) aggregation operators.

3.1. Probabilistic Linguistic Power Average (PLPA) Aggregation Operators

In this section, we discuss the extension of power average (PA) aggregation operators to accommodate the probabilistic linguistic environment. In the following, we develop probabilistic linguistic power average aggregation operators that allow the input data to support each other in the aggregation process, i.e., the probabilistic linguistic power average (PLPA) and weighted probabilistic linguistic power average (WPLPA) operators.

3.1.1. PLPA

Based on the results of Definitions 1 and 8, we present the definition of the PLPA aggregation operator as follows:
Definition 9.
Let $L_i(p) = \{L_i^{(k)}(p_i^{(k)}) \mid k = 1, 2, \ldots, \#L_i(p)\}$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs. A probabilistic linguistic power average (PLPA) is a mapping $L^n(p) \to L(p)$ such that:
$$PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \frac{\bigoplus_{i=1}^{n} (1 + T(L_i(p))) L_i(p)}{\sum_{i=1}^{n} (1 + T(L_i(p)))},$$
where:
$$T(L_i(p)) = \sum_{j=1, j \ne i}^{n} sup(L_i(p), L_j(p)),$$
and $sup(L_i(p), L_j(p))$ is considered to be the support for $L_i(p)$ from $L_j(p)$, which satisfies the following properties:
(1) $sup(L_i(p), L_j(p)) \in [0, 1]$;
(2) $sup(L_i(p), L_j(p)) = sup(L_j(p), L_i(p))$;
(3) $sup(L_i(p), L_j(p)) \ge sup(L_i(p), L_k(p))$ if $d(L_i(p), L_j(p)) < d(L_i(p), L_k(p))$.
In light of operation law (1) of Definition 3, Definition 9 can be transformed into the following form:
$$PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \frac{1 + T(L_1(p))}{\sum_{i=1}^{n} (1 + T(L_i(p)))} L_1(p) \oplus \frac{1 + T(L_2(p))}{\sum_{i=1}^{n} (1 + T(L_i(p)))} L_2(p) \oplus \cdots \oplus \frac{1 + T(L_n(p))}{\sum_{i=1}^{n} (1 + T(L_i(p)))} L_n(p).$$
Hence, we can deduce the following result from Definition 9.
Proposition 1.
Let $L_i(p) = \{L_i^{(k)}(p_i^{(k)}) \mid k = 1, 2, \ldots, \#L_i(p)\}$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs. The probabilistic linguistic power average (PLPA) is calculated as:
$$PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \bigoplus_{i=1}^{n} v_i L_i(p) = \bigcup_{L_1^{(k)} \in L_1(p)} \{ v_1 p_1^{(k)} L_1^{(k)} \} \oplus \bigcup_{L_2^{(k)} \in L_2(p)} \{ v_2 p_2^{(k)} L_2^{(k)} \} \oplus \cdots \oplus \bigcup_{L_n^{(k)} \in L_n(p)} \{ v_n p_n^{(k)} L_n^{(k)} \},$$
where $v_i = (1 + T(L_i(p))) \big/ \sum_{j=1}^{n} (1 + T(L_j(p)))$ ($i = 1, 2, \ldots, n$).
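The weights $v_i$ of Proposition 1 are purely numeric and can be computed as below; the support measure $sup = 1/(1 + d)$ is an assumed distance-based choice satisfying properties (1)-(3), with $d$ the deviation degree of Definition 7, and the PLTS representation is the illustrative list-of-pairs form used earlier.

```python
# Computing the PLPA weights v_i of Proposition 1 from pairwise supports.
def pltl_distance(l1, l2):
    """Deviation degree of Definition 7 for equal-length PLTSs."""
    n = len(l1)
    return (sum((r1 * p1 - r2 * p2) ** 2
                for (r1, p1), (r2, p2) in zip(l1, l2)) / n) ** 0.5

def plpa_weights(pltls):
    """v_i = (1 + T_i) / sum_j (1 + T_j), with sup = 1 / (1 + d) (assumed)."""
    n = len(pltls)
    T = [sum(1.0 / (1.0 + pltl_distance(pltls[i], pltls[j]))
             for j in range(n) if j != i) for i in range(n)]
    total = sum(1 + t for t in T)
    return [(1 + t) / total for t in T]

pltls = [[(4, 0.4), (3, 0.6)], [(5, 0.5), (3, 0.5)], [(4, 0.7), (3, 0.3)]]
print(plpa_weights(pltls))  # the v_i sum to 1
```

PLTSs that sit closer to the others receive larger supports $T_i$ and hence larger weights in the aggregation.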
On the basis of Definition 9 and Proposition 1, it can easily be proven that the PLPA aggregation operator has the following desirable properties.
Theorem 1.
(Commutativity) Let ( L 1 ( p ) * , L 2 ( p ) * , , L n ( p ) * ) be any permutation of ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) , then P L P A ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = P L P A ( L 1 ( p ) * , L 2 ( p ) * , , L n ( p ) * ) .
Proof. 
According to the result of Definition 2, L i ( p ) is called an ordered PLTS ( i = 1 , 2 , , n ) . By Proposition 1 and the operations law (1) of Definition 3, we can conclude that:
P L P A ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = P L P A ( L 1 ( p ) * , L 2 ( p ) * , , L n ( p ) * ) .
Therefore, we complete the proof of Theorem 1. ☐
Theorem 2.
(Idempotency) Let L i ( p ) ( i = 1 , 2 , , n ) be a collection of PLTSs. If all L i ( p ) ( i = 1 , 2 , , n ) are equal, i.e., L i ( p ) = L ( p ) , then P L P A ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = L ( p ) .
Proof. 
If L i ( p ) = L ( p ) for all i, then P L P A ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) is computed as:
$$PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \frac{\bigoplus_{i=1}^{n} (1 + T(L_i(p))) L_i(p)}{\sum_{i=1}^{n} (1 + T(L_i(p)))} = \bigoplus_{i=1}^{n} \frac{1}{n} L(p) = L(p).$$
Hence, the statement of Theorem 2 holds. ☐
Theorem 3.
(Boundedness) Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs; then we have:
$$\min_{i=1}^{n} \min_{k=1}^{\#L_i(p)} p_i^{(k)} L_i^{(k)} \le L \le \max_{i=1}^{n} \max_{k=1}^{\#L_i(p)} p_i^{(k)} L_i^{(k)},$$
where $L = PLPA(L_1(p), L_2(p), \ldots, L_n(p))$.
Proof. 
According to the result of Proposition 1, $PLPA(L_1(p), L_2(p), \ldots, L_n(p))$ is computed as:
$$PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \bigoplus_{i=1}^{n} v_i L_i(p) = \bigcup_{L_1^{(k)} \in L_1(p)} \{ v_1 p_1^{(k)} L_1^{(k)} \} \oplus \cdots \oplus \bigcup_{L_n^{(k)} \in L_n(p)} \{ v_n p_n^{(k)} L_n^{(k)} \}.$$
Then, we can deduce the following relationship:
$$\min_{i=1}^{n} \min_{k=1}^{\#L_i(p)} p_i^{(k)} L_i^{(k)} \le p_i^{(k)} L_i^{(k)} \le \max_{i=1}^{n} \max_{k=1}^{\#L_i(p)} p_i^{(k)} L_i^{(k)}.$$
By utilizing the result of Theorem 2, we can easily finish the proof of Theorem 3. ☐
Theorem 4.
(Monotonicity) Let $L_i(p)$ and $L_i(p)^*$ be two sets of PLTSs with identical numbers of linguistic terms ($i = 1, 2, \ldots, n$). If $L_i^{(k)}(p_i^{(k)}) \le L_i^{(k)}(p_i^{(k)})^*$ for all $i$, i.e., $L_i(p) \le L_i(p)^*$, then $PLPA(L_1(p), L_2(p), \ldots, L_n(p)) \le PLPA(L_1(p)^*, L_2(p)^*, \ldots, L_n(p)^*)$.
Theorem 5.
Let $sup(L_i(p), L_j(p)) = k$ for all $i \ne j$; then $PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \bigoplus_{i=1}^{n} \frac{1}{n} L_i(p)$.
Proof. 
If s u p ( L i ( p ) , L j ( p ) ) = k for all i j , it indicates that all the supports are the same. In this situation, the PLPA operator is computed as follows:
$$PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \frac{\bigoplus_{i=1}^{n} (1 + T(L_i(p))) L_i(p)}{\sum_{i=1}^{n} (1 + T(L_i(p)))} = \frac{\bigoplus_{i=1}^{n} (1 + (n-1)k) L_i(p)}{\sum_{i=1}^{n} (1 + (n-1)k)} = \bigoplus_{i=1}^{n} \frac{1}{n} L_i(p).$$
It is a simple probabilistic linguistic averaging operator. Hence, the statement of Theorem 5 holds. ☐

3.1.2. WPLPA

With respect to the PLPA operator, the weights of the arguments should be considered, because each argument being aggregated has a weight indicating its importance [3]. Based on this idea, we extend the PLPA operator and give the definition of the weighted probabilistic linguistic power average (WPLPA) operator as follows:
Definition 10.
Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs, and let $w = (w_1, w_2, \ldots, w_n)^T$ denote the weighting vector of $L_i(p)$, with $w_i \in [0, 1]$ and $\sum_{i=1}^{n} w_i = 1$. We define the weighted probabilistic linguistic power average (WPLPA) operator as follows:
$$WPLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \frac{\bigoplus_{i=1}^{n} w_i (1 + T(L_i(p))) L_i(p)}{\sum_{i=1}^{n} w_i (1 + T(L_i(p)))}.$$
In this case, $T(L_i(p)) = \sum_{j=1, j \ne i}^{n} w_j \, sup(L_i(p), L_j(p))$.
Based on the operations of the PLTSs described in Definition 3, we can derive the following Proposition 2.
Proposition 2.
Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs; then their aggregated value obtained by using the WPLPA operator is also a PLTS, and:
$$WPLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \bigcup_{L_1^{(k)} \in L_1(p)} \{ v_1 p_1^{(k)} L_1^{(k)} \} \oplus \bigcup_{L_2^{(k)} \in L_2(p)} \{ v_2 p_2^{(k)} L_2^{(k)} \} \oplus \cdots \oplus \bigcup_{L_n^{(k)} \in L_n(p)} \{ v_n p_n^{(k)} L_n^{(k)} \},$$
where $v_i = w_i (1 + T(L_i(p))) \big/ \sum_{j=1}^{n} w_j (1 + T(L_j(p)))$ ($i = 1, 2, \ldots, n$).
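The WPLPA weights $v_i$ fold the importance weights $w_i$ into the support-based weights. A small sketch follows, with the symmetric support matrix supplied directly; its values (and the helper name) are illustrative.

```python
# WPLPA weights v_i of Proposition 2 from a given support matrix and weight vector.
def wplpa_weights(sup, w):
    """sup: n x n symmetric support matrix in [0,1]; w: importance weights, sum 1."""
    n = len(w)
    # T(L_i(p)) = sum_{j != i} w_j * sup(L_i(p), L_j(p))
    T = [sum(w[j] * sup[i][j] for j in range(n) if j != i) for i in range(n)]
    total = sum(w[i] * (1 + T[i]) for i in range(n))
    return [w[i] * (1 + T[i]) / total for i in range(n)]

sup = [[1.0, 0.8, 0.6],
       [0.8, 1.0, 0.9],
       [0.6, 0.9, 1.0]]
w = [0.5, 0.3, 0.2]
print(wplpa_weights(sup, w))  # the v_i sum to 1
```

With $w = (1/n, \ldots, 1/n)^T$ this reduces, up to the common factor $1/n$, to the PLPA weights of Proposition 1.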
In particular, if $sup(L_i(p), L_j(p)) = 0$ for all $i \ne j$, then $T(L_i(p)) = 0$ and $WPLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \bigoplus_{i=1}^{n} w_i L_i(p)$. In this situation, the WPLPA operator reduces to the PLWA operator proposed in Ref. [6]. If the weight vector is $w = (w_1, w_2, \ldots, w_n)^T = (\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n})^T$, then $v_i$ of Proposition 2 is computed as:
$$v_i = \frac{w_i (1 + T(L_i(p)))}{\sum_{j=1}^{n} w_j (1 + T(L_j(p)))} = \frac{1 + T(L_i(p))}{\sum_{j=1}^{n} (1 + T(L_j(p)))}.$$
Thus, the WPLPA operator is computed as:
$$WPLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \bigcup_{L_1^{(k)} \in L_1(p)} \{ v_1 p_1^{(k)} L_1^{(k)} \} \oplus \cdots \oplus \bigcup_{L_n^{(k)} \in L_n(p)} \{ v_n p_n^{(k)} L_n^{(k)} \} = PLPA(L_1(p), L_2(p), \ldots, L_n(p)).$$
This indicates that the WPLPA operator reduces to the PLPA operator. According to the results of Definitions 3 and 10, it can easily be proven that the WPLPA operator has the following properties.
Theorem 6.
(Idempotency) Let L i ( p ) ( i = 1 , 2 , , n ) be a collection of PLTSs, if all L i ( p ) ( i = 1 , 2 , , n ) are equal, i.e., L i ( p ) = L ( p ) , then W P L P A ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = L ( p ) .
Proof. 
If L i ( p ) = L ( p ) for all i, then W P L P A ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) is computed as:
$$WPLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \frac{\bigoplus_{i=1}^{n} w_i (1 + T(L_i(p))) L(p)}{\sum_{i=1}^{n} w_i (1 + T(L_i(p)))} = \left( \sum_{i=1}^{n} \frac{w_i (1 + T(L_i(p)))}{\sum_{j=1}^{n} w_j (1 + T(L_j(p)))} \right) L(p) = L(p).$$
Thus, the statement of Theorem 6 holds. ☐
Theorem 7.
(Boundedness) Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs; then we have:
$$\min_{i=1}^{n} \min_{k=1}^{\#L_i(p)} p_i^{(k)} L_i^{(k)} \le L \le \max_{i=1}^{n} \max_{k=1}^{\#L_i(p)} p_i^{(k)} L_i^{(k)},$$
where $L = WPLPA(L_1(p), L_2(p), \ldots, L_n(p))$.
If we let $sup(L_i(p), L_j(p)) = k$ for all $i \ne j$, we have $T(L_i(p)) = \sum_{j=1, j \ne i}^{n} w_j \, sup(L_i(p), L_j(p)) = k \sum_{j=1, j \ne i}^{n} w_j$ ($i = 1, 2, \ldots, n$). Based on the result of Definition 10, we have:
$$WPLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \frac{\bigoplus_{i=1}^{n} w_i \big( 1 + k \sum_{j=1, j \ne i}^{n} w_j \big) L_i(p)}{\sum_{i=1}^{n} w_i \big( 1 + k \sum_{j=1, j \ne i}^{n} w_j \big)}.$$
In this case, $WPLPA(L_1(p), L_2(p), \ldots, L_n(p))$ is not equivalent to $PLPA(L_1(p), L_2(p), \ldots, L_n(p)) = \bigoplus_{i=1}^{n} \frac{1}{n} L_i(p)$.
Theorem 8.
Let $(L_1(p)^*, L_2(p)^*, \ldots, L_n(p)^*)$ be any permutation of $(L_1(p), L_2(p), \ldots, L_n(p))$; then, in general:
$$WPLPA(L_1(p)^*, L_2(p)^*, \ldots, L_n(p)^*) \ne WPLPA(L_1(p), L_2(p), \ldots, L_n(p)).$$
Proof. 
According to the result of Definition 10, we can obtain:
$$T(L_i(p)^*) = \sum_{j=1, j \ne i}^{n} w_j \, sup(L_i(p)^*, L_j(p)^*).$$
Then, we can deduce:
$$WPLPA(L_1(p)^*, L_2(p)^*, \ldots, L_n(p)^*) = \frac{\bigoplus_{i=1}^{n} w_i (1 + T(L_i(p)^*)) L_i(p)^*}{\sum_{i=1}^{n} w_i (1 + T(L_i(p)^*))}.$$
Since $(T(L_1(p)^*), T(L_2(p)^*), \ldots, T(L_n(p)^*))$ may not be a permutation of $(T(L_1(p)), T(L_2(p)), \ldots, T(L_n(p)))$, we can judge that the WPLPA operator is not commutative. Therefore, we complete the proof of Theorem 8. ☐

3.2. Probabilistic Linguistic Power Geometric (PLPG) Aggregation Operators

In this section, we investigate the extension of power geometric (PG) aggregation operators under the probabilistic linguistic environment, i.e., the probabilistic linguistic power geometric (PLPG) and weighted probabilistic linguistic power geometric (WPLPG).

3.2.1. PLPG

By utilizing the results of Definition 1 and Equation (7), we present the Definition of the PLPG operator as follows.
Definition 11.
Let $L_i(p) = \{L_i^{(k)}(p_i^{(k)}) \mid k = 1, 2, \ldots, \#L_i(p)\}$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs. A probabilistic linguistic power geometric (PLPG) operator is a mapping $L^n(p) \to L(p)$ such that:
$$PLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigotimes_{i=1}^{n} (L_i(p))^{\frac{1 + T(L_i(p))}{\sum_{j=1}^{n} (1 + T(L_j(p)))}},$$
where $T(L_i(p)) = \sum_{j=1, j \ne i}^{n} sup(L_i(p), L_j(p))$, and $sup(L_i(p), L_j(p))$ is considered to be the support for $L_i(p)$ from $L_j(p)$, which also satisfies the following properties:
(1) $sup(L_i(p), L_j(p)) \in [0, 1]$;
(2) $sup(L_i(p), L_j(p)) = sup(L_j(p), L_i(p))$;
(3) $sup(L_i(p), L_j(p)) \ge sup(L_i(p), L_k(p))$ if $d(L_i(p), L_j(p)) < d(L_i(p), L_k(p))$.
By operation law (2) of Definition 3, Definition 11 can be transformed into the following form:
$$PLPG(L_1(p), L_2(p), \ldots, L_n(p)) = (L_1(p))^{\frac{1 + T(L_1(p))}{\sum_{i=1}^{n} (1 + T(L_i(p)))}} \otimes (L_2(p))^{\frac{1 + T(L_2(p))}{\sum_{i=1}^{n} (1 + T(L_i(p)))}} \otimes \cdots \otimes (L_n(p))^{\frac{1 + T(L_n(p))}{\sum_{i=1}^{n} (1 + T(L_i(p)))}}.$$
Therefore, we can deduce the following based on the results of Definition 11:
Proposition 3.
Let $L_i(p) = \{L_i^{(k)}(p_i^{(k)}) \mid k = 1, 2, \ldots, \#L_i(p)\}$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs. The probabilistic linguistic power geometric (PLPG) operator is calculated as:
$$PLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigotimes_{i=1}^{n} (L_i(p))^{v_i} = \bigcup_{L_1^{(k)} \in L_1(p)} \{ (L_1^{(k)})^{v_1 p_1^{(k)}} \} \otimes \bigcup_{L_2^{(k)} \in L_2(p)} \{ (L_2^{(k)})^{v_2 p_2^{(k)}} \} \otimes \cdots \otimes \bigcup_{L_n^{(k)} \in L_n(p)} \{ (L_n^{(k)})^{v_n p_n^{(k)}} \},$$
where $v_i = (1 + T(L_i(p))) \big/ \sum_{j=1}^{n} (1 + T(L_j(p)))$ ($i = 1, 2, \ldots, n$).
On the basis of Definition 11 and Proposition 3, it can be proved that the PLPG operator has the following desirable properties:
Theorem 9.
(Commutativity) Let ( L 1 ( p ) * , L 2 ( p ) * , , L n ( p ) * ) be any permutation of ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) then P L P G ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = P L P G ( L 1 ( p ) * , L 2 ( p ) * , , L n ( p ) * ) .
Proof. 
According to the result of Definition 2, L i ( p ) is called an ordered PLTS ( i = 1 , 2 , , n ) . By the results of Proposition 3 and the operations laws (2) of Definition 3, we can conclude that:
P L P G ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = P L P G ( L 1 ( p ) * , L 2 ( p ) * , , L n ( p ) * ) .
Therefore, we complete the proof of Theorem 9. ☐
Theorem 10.
(Idempotency) Let L i ( p ) ( i = 1 , 2 , , n ) be a collection of PLTSs. If all L i ( p ) ( i = 1 , 2 , , n ) are equal, i.e., L i ( p ) = L ( p ) , then P L P G ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = L ( p ) .
Proof. 
If L i ( p ) = L ( p ) for all i, then P L P G ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) is computed as:
$$PLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigotimes_{i=1}^{n} (L_i(p))^{\frac{1 + T(L_i(p))}{\sum_{j=1}^{n} (1 + T(L_j(p)))}} = \bigotimes_{i=1}^{n} (L(p))^{\frac{1}{n}} = L(p).$$
Hence, the statement of Theorem 10 holds. ☐
Theorem 11.
(Boundedness) Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs; then we have:
$$\min_{i=1}^{n} \min_{k=1}^{\#L_i(p)} (L_i^{(k)})^{p_i^{(k)}} \le L \le \max_{i=1}^{n} \max_{k=1}^{\#L_i(p)} (L_i^{(k)})^{p_i^{(k)}},$$
where $L = PLPG(L_1(p), L_2(p), \ldots, L_n(p))$.
Proof. 
According to the result of Proposition 3, $PLPG(L_1(p), L_2(p), \ldots, L_n(p))$ is computed as:
$$PLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigotimes_{i=1}^{n} (L_i(p))^{v_i} = \bigcup_{L_1^{(k)} \in L_1(p)} \{ (L_1^{(k)})^{v_1 p_1^{(k)}} \} \otimes \cdots \otimes \bigcup_{L_n^{(k)} \in L_n(p)} \{ (L_n^{(k)})^{v_n p_n^{(k)}} \}.$$
Then, we can deduce the following relationship:
$$\min_{i=1}^{n} \min_{k=1}^{\#L_i(p)} (L_i^{(k)})^{p_i^{(k)}} \le (L_i^{(k)})^{p_i^{(k)}} \le \max_{i=1}^{n} \max_{k=1}^{\#L_i(p)} (L_i^{(k)})^{p_i^{(k)}}.$$
In light of the results of Theorem 10, we can easily finish the proof of Theorem 11. ☐

3.2.2. WPLPG

Considering the importance of the aggregated arguments, we extend the PLPG operator and give the definition of the weighted probabilistic linguistic power geometric (WPLPG) operator as follows.
Definition 12.
Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs, and let $w = (w_1, w_2, \ldots, w_n)^T$ denote the weighting vector of $L_i(p)$, with $w_i \in [0, 1]$ and $\sum_{i=1}^{n} w_i = 1$. We define the weighted probabilistic linguistic power geometric (WPLPG) operator as follows:
$$WPLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigotimes_{i=1}^{n} (L_i(p))^{\frac{w_i (1 + T(L_i(p)))}{\sum_{j=1}^{n} w_j (1 + T(L_j(p)))}}.$$
In this case, $T(L_i(p)) = \sum_{j=1, j \ne i}^{n} w_j \, sup(L_i(p), L_j(p))$.
Based on the operations of the PLTSs described in Definition 3, we can derive the following proposition:
Proposition 4.
Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs; then their aggregated value obtained by using the WPLPG operator is also a PLTS, and:
$$WPLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigcup_{L_1^{(k)} \in L_1(p)} \{ (L_1^{(k)})^{v_1 p_1^{(k)}} \} \otimes \bigcup_{L_2^{(k)} \in L_2(p)} \{ (L_2^{(k)})^{v_2 p_2^{(k)}} \} \otimes \cdots \otimes \bigcup_{L_n^{(k)} \in L_n(p)} \{ (L_n^{(k)})^{v_n p_n^{(k)}} \},$$
where $v_i = w_i (1 + T(L_i(p))) \big/ \sum_{j=1}^{n} w_j (1 + T(L_j(p)))$ ($i = 1, 2, \ldots, n$).
For the result of Proposition 4, if $sup(L_i(p), L_j(p)) = 0$ for all $i \ne j$, then $T(L_i(p)) = 0$. Thus, we have:
$$WPLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigotimes_{i=1}^{n} (L_i(p))^{w_i}.$$
In this situation, the WPLPG operator reduces to the PLWG operator proposed in Ref. [6]. If the weight vector is $w = (w_1, w_2, \ldots, w_n)^T = (\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n})^T$, then $v_i$ of Proposition 4 is computed as:
$$v_i = \frac{w_i (1 + T(L_i(p)))}{\sum_{j=1}^{n} w_j (1 + T(L_j(p)))} = \frac{1 + T(L_i(p))}{\sum_{j=1}^{n} (1 + T(L_j(p)))}.$$
Hence, the WPLPG operator is computed as:
$$WPLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigcup_{L_1^{(k)} \in L_1(p)} \{ (L_1^{(k)})^{v_1 p_1^{(k)}} \} \otimes \cdots \otimes \bigcup_{L_n^{(k)} \in L_n(p)} \{ (L_n^{(k)})^{v_n p_n^{(k)}} \} = PLPG(L_1(p), L_2(p), \ldots, L_n(p)).$$
Thus, the WPLPG operator can be reduced to the PLPG operator. According to the results of Definitions 3 and 12, it can easily be proven that the WPLPG operator has the following properties.
Theorem 12.
(Idempotency) Let L i ( p ) ( i = 1 , 2 , , n ) be a collection of PLTSs, if all L i ( p ) ( i = 1 , 2 , , n ) are equal, i.e., L i ( p ) = L ( p ) , then W P L P G ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) = L ( p ) .
Proof. 
If L i ( p ) = L ( p ) for all i, then W P L P G ( L 1 ( p ) , L 2 ( p ) , , L n ( p ) ) is computed as:
$$WPLPG(L_1(p), L_2(p), \ldots, L_n(p)) = \bigotimes_{i=1}^{n} (L_i(p))^{\frac{w_i (1 + T(L_i(p)))}{\sum_{j=1}^{n} w_j (1 + T(L_j(p)))}} = (L(p))^{\sum_{i=1}^{n} v_i} = L(p).$$
Thus, the statement of Theorem 12 holds. ☐
Theorem 13.
(Boundedness) Let $L_i(p)$ ($i = 1, 2, \ldots, n$) be a collection of PLTSs; then we have:
$$\min_{i=1}^{n} \min_{k=1}^{\#L_i(p)} (L_i^{(k)})^{p_i^{(k)}} \le L \le \max_{i=1}^{n} \max_{k=1}^{\#L_i(p)} (L_i^{(k)})^{p_i^{(k)}},$$
where $L = WPLPG(L_1(p), L_2(p), \ldots, L_n(p))$.
Theorem 14.
Let $(L_1(p)^*, L_2(p)^*, \ldots, L_n(p)^*)$ be any permutation of $(L_1(p), L_2(p), \ldots, L_n(p))$; then, in general:
$$WPLPG(L_1(p)^*, L_2(p)^*, \ldots, L_n(p)^*) \ne WPLPG(L_1(p), L_2(p), \ldots, L_n(p)).$$
Proof. 
According to the result of Definition 12, we can obtain:
$$T(L_i(p)^*) = \sum_{j=1, j \neq i}^{n} w_j \, \mathrm{sup}(L_i(p)^*, L_j(p)^*).$$
Then, we can deduce:
$$\mathrm{WPLPG}(L_1(p)^*, L_2(p)^*, \ldots, L_n(p)^*) = \bigotimes_{i=1}^{n} \left(L_i(p)^*\right)^{\frac{w_i \left(1 + T(L_i(p)^*)\right)}{\sum_{i=1}^{n} w_i \left(1 + T(L_i(p)^*)\right)}}.$$
Since $(T(L_1(p)^*), T(L_2(p)^*), \ldots, T(L_n(p)^*))$ may not be a permutation of $(T(L_1(p)), T(L_2(p)), \ldots, T(L_n(p)))$, the power weights generally change under a permutation of the arguments, so the WPLPG operator is not commutative. Hence, we complete the proof of Theorem 14. ☐

4. Approaches to Multi-Criteria Group Decision Making with Probabilistic Linguistic Power Aggregation Operators

In this section, we first present an MCGDM problem in which the evaluation information is expressed by PLTSs. Then, we utilize the WPLPA or WPLPG operator to support the decision. Let $X = \{x_1, x_2, \ldots, x_m\}$ be a finite set of $m$ alternatives and $C = \{c_1, c_2, \ldots, c_n\}$ be a set of $n$ attributes. Suppose that $D = \{d_1, d_2, \ldots, d_e\}$ denotes the set of DMs. By using the linguistic scale $S = \{s_\alpha \mid \alpha = 0, 1, \ldots, \tau\}$, each DM $d_q$ provides his or her linguistic evaluations over the alternative $x_i$ with respect to the attribute $c_j$, i.e., $A_q = (L_{ij}^q)_{m \times n}$ $(i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n;\ q = 1, 2, \ldots, e)$. Then, we determine the collective evaluations of the DMs for each alternative in terms of PLTSs. In the context of GDM, the linguistic evaluation values $L_{ij}^{(k)}$ $(k = 1, 2, \ldots, \#L_{ij}(p))$ with the corresponding probabilities $p_{ij}^{(k)}$ are described as the PLTS $L_{ij}(p) = \{L_{ij}^{(k)}(p_{ij}^{(k)}) \mid k = 1, 2, \ldots, \#L_{ij}(p)\}$, where $\#L_{ij}(p)$ is the number of linguistic terms in $L_{ij}(p)$. The PLTS $L_{ij}(p)$ denotes the evaluation values over the alternative $x_i$ $(i = 1, 2, \ldots, m)$ with respect to the attribute $c_j$ $(j = 1, 2, \ldots, n)$, where $L_{ij}^{(k)}$ is the $k$-th value of $L_{ij}(p)$ and $p_{ij}^{(k)}$ is its probability. In this case, $p_{ij}^{(k)} \geq 0$ and $\sum_{k=1}^{\#L_{ij}(p)} p_{ij}^{(k)} = 1$. All the PLTSs are contained in the probabilistic linguistic decision matrix $R$, which is shown as follows:
$$R = (L_{ij}(p))_{m \times n} = \begin{pmatrix} L_{11}(p) & L_{12}(p) & \cdots & L_{1n}(p) \\ L_{21}(p) & L_{22}(p) & \cdots & L_{2n}(p) \\ \vdots & \vdots & \ddots & \vdots \\ L_{m1}(p) & L_{m2}(p) & \cdots & L_{mn}(p) \end{pmatrix}.$$
Without loss of generality, we assume that each PLTS $L_{ij}(p)$ is an ordered PLTS. Let $w = (w_1, w_2, \ldots, w_n)^T$ denote the weighting vector of the attributes $C$, where $w_j \in [0, 1]$ and $\sum_{j=1}^{n} w_j = 1$. Based on the above results, we use the WPLPA or WPLPG aggregation operator to develop the corresponding approach for MCGDM with probabilistic linguistic information. This approach is designed as follows:
Step 1: According to the practical decision-making problem, we determine the alternatives X = { x 1 , x 2 , , x m } and a set of the attributes C = { c 1 , c 2 , , c n } . Then, we can obtain the decision matrix A q = ( L i j q ) m × n provided by the DM d q . By using the PLTSs, we construct the collective matrix R = ( L i j ( p ) ) m × n .
Step 2: With respect to the collective matrix R = ( L i j ( p ) ) m × n , we can normalize the entries of R as stated in Definition 6.
Step 3: Based on the matrix R and the result of Definition 7, the deviation degree between PLTSs L i j ( p ) and L i t ( p ) is calculated below ( i = 1 , 2 , , m ; j , t = 1 , 2 , , n ) :
$$d(L_{ij}(p), L_{it}(p)) = \sqrt{\frac{\sum_{k=1}^{\#L_{ij}(p)} \left(p_{ij}^{(k)} r_{ij}^{(k)} - p_{it}^{(k)} r_{it}^{(k)}\right)^2}{\#L_{ij}(p)}}.$$
Step 4: By using the results of Definitions 7 and 8, we calculate the support of the alternative x i as follows:
$$\mathrm{sup}(L_{ij}(p), L_{it}(p)) = 1 - \frac{d(L_{ij}(p), L_{it}(p))}{\sum_{g=1, g \neq j}^{n} d(L_{ij}(p), L_{ig}(p))},$$
which satisfies the support conditions (1)–(3) of Definition 9.
Step 5: According to the result of Definition 10, we can calculate the support $T(L_{ij}(p))$ of each $L_{ij}(p)$ from all the other $L_{it}(p)$ $(j, t = 1, 2, \ldots, n;\ t \neq j)$:
$$T(L_{ij}(p)) = \sum_{t=1, t \neq j}^{n} w_t \, \mathrm{sup}(L_{ij}(p), L_{it}(p)).$$
Step 6: With the aid of Proposition 2, we further compute the weight v i j associated with the PLTS L i j ( p ) :
$$v_{ij} = \frac{w_j \left(1 + T(L_{ij}(p))\right)}{\sum_{j=1}^{n} w_j \left(1 + T(L_{ij}(p))\right)}.$$
Step 7: If the DM prefers the WPLPA operator, then the aggregated value of the alternative x i is determined based on Equation (12). The result is:
$$\mathrm{WPLPA}(L_{i1}(p), L_{i2}(p), \ldots, L_{in}(p)) = \bigcup_{L_{i1}^{(k)} \in L_{i1}(p)} v_{i1} p_{i1}^{(k)} L_{i1}^{(k)} \oplus \bigcup_{L_{i2}^{(k)} \in L_{i2}(p)} v_{i2} p_{i2}^{(k)} L_{i2}^{(k)} \oplus \cdots \oplus \bigcup_{L_{in}^{(k)} \in L_{in}(p)} v_{in} p_{in}^{(k)} L_{in}^{(k)}.$$
If the DM uses the WPLPG operator, then the aggregated value of the alternative x i is determined based on Equation (16). The result is:
$$\mathrm{WPLPG}(L_{i1}(p), L_{i2}(p), \ldots, L_{in}(p)) = \bigcup_{L_{i1}^{(k)} \in L_{i1}(p)} \left(L_{i1}^{(k)}\right)^{v_{i1} p_{i1}^{(k)}} \otimes \bigcup_{L_{i2}^{(k)} \in L_{i2}(p)} \left(L_{i2}^{(k)}\right)^{v_{i2} p_{i2}^{(k)}} \otimes \cdots \otimes \bigcup_{L_{in}^{(k)} \in L_{in}(p)} \left(L_{in}^{(k)}\right)^{v_{in} p_{in}^{(k)}}.$$
In this case, we denote the aggregated value of the alternative x i as Z i .
Step 8: Based on the results of Definition 4, the score and the deviation degree of Z i of the alternative x i are computed, i.e., E ( Z i ) and σ ( Z i ) ( i = 1 , 2 , , m ) .
Step 9: Rank all of the alternatives in accordance with the ranking results of Definition 5.

5. An Illustrative Example

In recent years, there has been considerable concern regarding problems associated with undergraduate school rankings, graduate school rankings, and the evaluation and rewarding of university professors in China and other countries of the world. Katz [20] mentioned that these problems have always existed and that political activism together with various economic recessions has worsened them. Katz was concerned with the criteria for evaluating professors, used multiple regression analysis to determine the factors important in salary and promotion decision-making at the university level, and developed a more rational means of evaluating and rewarding university professors. This work was motivated by the fact that there is a discriminatory policy in rank and reward in the universities which is not necessarily justifiable. It further stated that rewarding professors goes through an arbitrary and chaotic process, and that a more equitable system could be instituted to enhance the decision-making process. Another concern raised was that decisions on salaries and promotions were made in an intuitive manner, such that the weights attached to the various classification criteria lack a clear interpretation. In this section, we illustrate our proposed approach by evaluating university faculty for tenure and promotion in China, adapted from Bryson et al. [21]. Hence, we first present an MCGDM problem in which the evaluation information is expressed by PLTSs. Then, we utilize the WPLPA and WPLPG operators to support our decision. In light of the results of Ref. [21], the criteria considered for the assessment of the decision problem are: (1) teaching ($c_1$); (2) research ($c_2$); (3) service ($c_3$). Let $X = \{x_1, x_2, x_3, x_4, x_5\}$ be the set of five alternatives and $C = \{c_1, c_2, c_3\}$ be the set of three attributes. The linguistic scale is $S = \{s_\alpha \mid \alpha = 0, 1, \ldots, 8\}$.
Suppose that D = { d 1 , d 2 , d 3 , d 4 } denotes the set of DMs. Based on the results of Ref. [22], their evaluations are shown in Table 1, Table 2, Table 3 and Table 4.
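The collective PLTSs of Table 5 can be reconstructed from these four individual matrices: each of the e = 4 DMs' judgements carries probability 1/4, and duplicate terms are merged by summing their probabilities. A sketch of this encoding (inferred from how Tables 1–4 map to Table 5, not quoted from the paper):

```python
from collections import Counter

def collective_plts(subscripts):
    # subscripts: the linguistic-term subscripts given by the e DMs for one
    # (alternative, attribute) cell; each judgement carries probability 1/e,
    # and repeated terms are merged by summing probabilities.
    e = len(subscripts)
    return {r: c / e for r, c in Counter(subscripts).items()}

# Cell (x1, c1) of Tables 1-4: s8, s6, s7, s6
print(collective_plts([8, 6, 7, 6]))  # {8: 0.25, 6: 0.5, 7: 0.25}, as in Table 5
```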

5.1. Decision Analysis with Our Proposed Approaches

Based on the proposed approaches of Section 4, we need to fuse the information presented in the decision matrices $A_1$–$A_4$ by (17). In the context of GDM, all the PLTSs are contained in the probabilistic linguistic decision matrix $R$. Hence, the result is shown in Table 5.
For Table 5, each PLTS L i j ( p ) is assumed to be an ordered PLTS ( i = 1 , 2 , 3 , 4 , 5 ; j = 1 , 2 , 3 ). In this case, the weighting vector of the attributes C is w = ( w 1 , w 2 , w 3 ) T = ( 0.3 , 0.4 , 0.3 ) T . We use the WPLPA or WPLPG aggregation operator to analyze the results of Table 5. Based on the above results and the proposed methods of Section 4, the detailed steps are shown as follows:
Step 2: With respect to the collective matrix $R = (L_{ij}(p))_{5 \times 3}$, we can find that the numbers of linguistic terms in its entries are not all equal. Thus, we normalize the entries of $R$ as stated in Definition 6. The normalized probabilistic linguistic decision matrix is shown in Table 6.
Step 3: According to the results of Definition 7 and Table 6, the deviation degree between PLTSs L i j ( p ) and L i t ( p ) ( i = 1 , 2 , 3 , 4 , 5 ; j , t = 1 , 2 , 3 ) can be calculated by the following equation:
$$d(L_{ij}(p), L_{it}(p)) = \sqrt{\frac{\sum_{k=1}^{\#L_{ij}(p)} \left(p_{ij}^{(k)} r_{ij}^{(k)} - p_{it}^{(k)} r_{it}^{(k)}\right)^2}{\#L_{ij}(p)}}.$$
Then, we can calculate the deviation degree of any two $L_{ij}(p)$, respectively. For alternative $x_1$, the deviation degrees are:
$d(L_{11}, L_{12}) = 0.5303$; $d(L_{12}, L_{13}) = 0.8292$; $d(L_{11}, L_{13}) = 1.2119$.
For alternative $x_2$:
$d(L_{21}, L_{22}) = 0.9014$; $d(L_{22}, L_{23}) = 1.0752$; $d(L_{21}, L_{23}) = 1.8114$.
For alternative $x_3$:
$d(L_{31}, L_{32}) = 0.3536$; $d(L_{32}, L_{33}) = 0.2795$; $d(L_{31}, L_{33}) = 0.1250$.
For alternative $x_4$:
$d(L_{41}, L_{42}) = 1.0383$; $d(L_{42}, L_{43}) = 0.8750$; $d(L_{41}, L_{43}) = 0.3536$.
For alternative $x_5$:
$d(L_{51}, L_{52}) = 1.0078$; $d(L_{52}, L_{53}) = 0.9185$; $d(L_{51}, L_{53}) = 0.5154$.
Step 4: Based on the results of Definitions 7 and 8, we can calculate the supports for each alternative $x_i$ by using (18) $(i = 1, 2, 3, 4, 5)$. The results are summarized as follows:
$\mathrm{sup}(L_{11}, L_{12}) = 0.7938$; $\mathrm{sup}(L_{12}, L_{13}) = 0.6775$; $\mathrm{sup}(L_{11}, L_{13}) = 0.5287$;
$\mathrm{sup}(L_{21}, L_{22}) = 0.7620$; $\mathrm{sup}(L_{22}, L_{23}) = 0.7161$; $\mathrm{sup}(L_{21}, L_{23}) = 0.5218$;
$\mathrm{sup}(L_{31}, L_{32}) = 0.5335$; $\mathrm{sup}(L_{32}, L_{33}) = 0.6313$; $\mathrm{sup}(L_{31}, L_{33}) = 0.8351$;
$\mathrm{sup}(L_{41}, L_{42}) = 0.5419$; $\mathrm{sup}(L_{42}, L_{43}) = 0.6140$; $\mathrm{sup}(L_{41}, L_{43}) = 0.8440$;
$\mathrm{sup}(L_{51}, L_{52}) = 0.5873$; $\mathrm{sup}(L_{52}, L_{53}) = 0.6238$; $\mathrm{sup}(L_{51}, L_{53}) = 0.7889$.
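The deviation degrees of Step 3 and the supports above can be reproduced with a short sketch. One caveat: the reported support values match Equation (18) only when the denominator is read as the total of all pairwise deviations for the alternative, which is the reading assumed below:

```python
from math import sqrt

def deviation(pl1, pl2):
    # Deviation degree of Definition 7 between two normalized, equal-length
    # PLTSs, each a list of (r, p) pairs (r = subscript, p = probability).
    n = len(pl1)
    return sqrt(sum((p1 * r1 - p2 * r2) ** 2
                    for (r1, p1), (r2, p2) in zip(pl1, pl2)) / n)

# Normalized PLTSs of alternative x1 (Table 6):
L11 = [(6, 0.5), (8, 0.25), (7, 0.25), (6, 0.0)]
L12 = [(8, 0.5), (7, 0.25), (6, 0.25), (6, 0.0)]
L13 = [(6, 0.75), (5, 0.25), (5, 0.0), (5, 0.0)]

d12, d23, d13 = deviation(L11, L12), deviation(L12, L13), deviation(L11, L13)
print(round(d12, 4), round(d23, 4), round(d13, 4))  # 0.5303 0.8292 1.2119

total = d12 + d23 + d13  # total pairwise deviation for alternative x1
print(round(1 - d12 / total, 4))  # 0.7938 = sup(L11, L12)
print(round(1 - d23 / total, 4))  # 0.6775 = sup(L12, L13)
```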
Step 5: In light of the result of Definition 10, we can calculate the support $T(L_{ij}(p))$ of each $L_{ij}(p)$ from all the other $L_{it}(p)$ $(j, t = 1, 2, 3;\ t \neq j)$ by the following equation:
$$T(L_{ij}(p)) = \sum_{t=1, t \neq j}^{3} w_t \, \mathrm{sup}(L_{ij}(p), L_{it}(p)).$$
These results are shown as the following matrix:
$$T(L_{ij}(p)) = \begin{pmatrix} 0.47613 & 0.44139 & 0.42961 \\ 0.46134 & 0.44343 & 0.44298 \\ 0.46393 & 0.34944 & 0.50305 \\ 0.46996 & 0.34677 & 0.49880 \\ 0.47159 & 0.36333 & 0.48619 \end{pmatrix}.$$
Step 6: With the aid of Proposition 2, we further compute the weight v i j associated with the PLTS L i j ( p ) by the following equation ( i = 1 , 2 , 3 , 4 , 5 ; j = 1 , 2 , 3 ):
$$v_{ij} = \frac{w_j \left(1 + T(L_{ij}(p))\right)}{\sum_{j=1}^{3} w_j \left(1 + T(L_{ij}(p))\right)}.$$
These results are shown as the following matrix:
$$v_{ij} = \begin{pmatrix} 0.3057 & 0.3981 & 0.2961 \\ 0.3026 & 0.3986 & 0.2988 \\ 0.3071 & 0.3775 & 0.3154 \\ 0.3085 & 0.3769 & 0.3146 \\ 0.3082 & 0.3906 & 0.3112 \end{pmatrix}.$$
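The first rows of the two matrices above can be verified from the Step 4 supports and the attribute weights w = (0.3, 0.4, 0.3)^T; a quick numeric check for alternative x_1:

```python
w = [0.3, 0.4, 0.3]
sup = {(0, 1): 0.7938, (1, 2): 0.6775, (0, 2): 0.5287}  # supports for x1 (Step 4)
s = lambda j, t: sup[(min(j, t), max(j, t))]

# Step 5: T(L_1j(p)) = sum over t != j of w_t * sup(L_1j, L_1t)
T = [sum(w[t] * s(j, t) for t in range(3) if t != j) for j in range(3)]
print([round(x, 5) for x in T])   # [0.47613, 0.44139, 0.42961] -- first row of T

# Step 6: power weights v_1j
denom = sum(w[j] * (1 + T[j]) for j in range(3))
v = [w[j] * (1 + T[j]) / denom for j in range(3)]
print([round(x, 4) for x in v])   # first row of the v matrix, up to rounding
```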
Step 7: If the DM prefers the WPLPA operator, then the aggregated value of the alternative x i is determined based on Equation (12) ( i = 1 , 2 , 3 , 4 , 5 ). We denote the aggregated value of the alternative x i as Z i . The results are:
Z 1 = W P L P A ( L 11 ( p ) , L 12 ( p ) , L 13 ( p ) ) = ( ( 0.9171 , 0.6114 , 0.5349 , 0 ) ; ( 1.5924 , 0.6967 , 0.5972 , 0 ) ; ( 1.3325 , 0.3701 , 0 , 0 ) ) = ( 3.8420 , 1.6782 , 1.1321 , 0 ) ,
Z 2 = W P L P A ( L 21 ( p ) , L 22 ( p ) , L 23 ( p ) ) = ( ( 0.6052 , 0.4539 , 0.3738 , 0.3026 ) ; ( 1.3951 , 0.5979 , 0.4983 , 0 ) ; ( 1.5687 , 0.4482 , 0 , 0 ) ) = ( 3.569 , 1.5 , 0.8766 , 0.3026 ) ,
$Z_3 = \mathrm{WPLPA}(L_{31}(p), L_{32}(p), L_{33}(p)) = ((1.0749, 0.6142, 0.3839, 0); (1.1325, 0.7550, 0.6606, 0); (1.1039, 0.6308, 0.4731, 0)) = (3.3113, 2, 1.5176, 0)$,
$Z_4 = \mathrm{WPLPA}(L_{41}(p), L_{42}(p), L_{43}(p)) = (2.6824, 1.8116, 1.4073, 0.3769)$,
Z 5 = W P L P A ( L 51 ( p ) , L 52 ( p ) , L 53 ( p ) ) = ( ( 1.2326 , 0.4622 , 0.3852 , 0 ) ; ( 1.3322 , 1.1419 , 0 , 0 ) ; ( 0.9336 , 0.5446 , 0.3890 , 0 ) ) = ( 3.4985 , 2.1488 , 0.7742 , 0 ) .
If the DM uses the WPLPG operator, then the aggregated value of the alternative x i is determined based on Equation (16) ( i = 1 , 2 , 3 , 4 , 5 ). In the same way, we denote the aggregated value of the alternative x i as Z i . The results are:
Z 1 = W P L P G ( L 11 ( p ) , L 12 ( p ) , L 13 ( p ) ) = ( ( 1.3150 , 1.1722 , 1.1603 , 1 ) ; ( 1.5127 , 1.2137 , 1.1952 , 1 ) ; ( 1.4887 , 1.1265 , 1 , 1 ) ) = ( 2.9613 , 1.6026 , 1.3868 , 1 ) ,
Z 2 = W P L P G ( L 21 ( p ) , L 22 ( p ) , L 23 ( p ) ) = ( ( 1.1703 , 1.1452 , 1.1295 , 1.1106 ) ; ( 1.4738 , 1.1955 , 1.1739 , 1 ) ; ( 1.5466 , 1.132 , 1 , 1 ) ) = ( 2.6676 , 1.5651 , 1.3259 , 1.1106 ) ,
Z 3 = W P L P G ( L 31 ( p ) , L 32 ( p ) , L 33 ( p ) ) = ( ( 1.3482 , 1.1731 , 1.1315 , 1 ) ; ( 1.4024 , 1.2168 , 1.2016 , 1 ) ; ( 1.3592 , 1.1782 , 1.1517 , 1 ) ) = ( 2.5698 , 1.6818 , 1.5658 , 1 ) ,
Z 4 = W P L P G ( L 41 ( p ) , L 42 ( p ) , L 43 ( p ) ) = ( ( 1.3500 , 1.1739 , 1.1322 , 1 ) ; ( 1.2012 , 1.1839 , 1.1637 , 1.1395 ) ; ( 1.3256 , 1.1777 , 1.1654 , 1 ) ) = ( 2.1496 , 1.6367 , 1.5355 , 1.1395 ) ,
Z 5 = W P L P G ( L 51 ( p ) , L 52 ( p ) , L 53 ( p ) ) = ( ( 1.3777 , 1.1480 , 1.1320 , 1 ) ; ( 1.4482 , 1.4064 , 1 , 1 ) ; ( 1.3216 , 1.1635 , 1.1334 , 1 ) ) = ( 2.6367 , 1.8784 , 1.2830 , 1 ) .
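These aggregated values can be spot-checked. A sketch of the WPLPA computation for Z_2 (the k-th terms of the three attribute PLTSs are combined position-wise, with the weights v_2j taken from Step 6):

```python
def wplpa_subscripts(plts_row, v_row):
    # One alternative: plts_row holds its normalized PLTSs (lists of (r, p)
    # pairs), one per attribute; the k-th aggregated subscript is
    # sum over attributes j of v_ij * p_ij(k) * r_ij(k).
    length = len(plts_row[0])
    return [sum(v * pl[k][1] * pl[k][0] for pl, v in zip(plts_row, v_row))
            for k in range(length)]

# Alternative x2 (Table 6) and its power weights from Step 6:
row_x2 = [[(8, 0.25), (6, 0.25), (5, 0.25), (4, 0.25)],
          [(7, 0.5), (6, 0.25), (5, 0.25), (5, 0.0)],
          [(7, 0.75), (6, 0.25), (6, 0.0), (6, 0.0)]]
v2 = [0.3026, 0.3986, 0.2988]
z2 = wplpa_subscripts(row_x2, v2)
print([round(x, 4) for x in z2])  # matches Z2 = (3.569, 1.5, 0.8766, 0.3026) up to rounding
```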
Step 8: Based on the results of Definition 4, the scores of the alternative $x_i$ can be computed, i.e., $E(Z_i)$. If the DM uses the WPLPA operator to aggregate the decision information, the scores are determined as follows:
E ( Z 1 ) = 1.6632 ; E ( Z 2 ) = 1.5620 ; E ( Z 3 ) = 1.7072 ; E ( Z 4 ) = 1.5697 ; E ( Z 5 ) = 1.6054 .
If the DM uses the WPLPG operator to aggregate the decision information, the scores are determined as follows:
E ( Z 1 ) = 1.7379 ; E ( Z 2 ) = 1.6673 ; E ( Z 3 ) = 1.7044 ; E ( Z 4 ) = 1.6154 ; E ( Z 5 ) = 1.6995 .
Step 9: If the DM uses the WPLPA operator, we can determine the ranking of the scores of the alternatives based on the results of Step 8. It is shown as follows:
E ( Z 3 ) > E ( Z 1 ) > E ( Z 5 ) > E ( Z 4 ) > E ( Z 2 ) .
That is to say, the ordering of the alternatives is:
x 3 > x 1 > x 5 > x 4 > x 2 .
If the DM uses the WPLPG operator, we can obtain the ranking of the scores of the alternatives as follows:
E ( Z 1 ) > E ( Z 3 ) > E ( Z 5 ) > E ( Z 2 ) > E ( Z 4 ) .
In this situation, the ordering of the alternatives is:
x 1 > x 3 > x 5 > x 2 > x 4 .
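The WPLPA branch of Steps 8 and 9 can be reproduced directly from the aggregated values Z_i: with the term probabilities of this example, the score of Definition 4 reduces to the mean of the aggregated subscripts (up to rounding in the reported values):

```python
Z = {  # WPLPA aggregated subscripts from Step 7
    "x1": (3.8420, 1.6782, 1.1321, 0.0),
    "x2": (3.5690, 1.5000, 0.8766, 0.3026),
    "x3": (3.3113, 2.0000, 1.5176, 0.0),
    "x4": (2.6824, 1.8116, 1.4073, 0.3769),
    "x5": (3.4985, 2.1488, 0.7742, 0.0),
}
E = {x: sum(z) / len(z) for x, z in Z.items()}  # scores E(Z_i)
ranking = sorted(E, key=E.get, reverse=True)
print(" > ".join(ranking))  # x3 > x1 > x5 > x4 > x2
```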

5.2. Comparison Analysis

Under the probabilistic linguistic environment, Pang et al. [6] developed an aggregation-based method for MAGDM. In order to verify the performance of our proposed methods, we compare our decision results with those of Pang et al. [6] on our illustrative example. Rodriguez et al. [12], Merigó et al. [22] and Zhang et al. [23] also developed methods for linguistic information and GDM. Thus, we also compare our results with the methods of Refs. [12,22,23]. The decision results are shown in Table 7.
In Table 7, we can find that the ranking of the method proposed in Ref. [6] is: $x_3 > x_1 > x_5 > x_2 > x_4$. Compared with the decision results of our proposed method with WPLPA, the aggregation-based method with PLTSs selects the same best candidate, i.e., $x_3$. Meanwhile, for the WPLPG, the best candidate is $x_1$. Under the method of Ref. [23], HFLWA gives the ranking $x_3 > x_1 > x_5 > x_2 = x_4$, while HFLWG gives $x_1 > x_2 > x_5 > x_3 > x_4$. By using the max lower operator of Ref. [12], the ranking is $x_3 > x_2 = x_5 = x_4 > x_1$. For ILGCIA with group decision making of Ref. [22], the result is $x_3 > x_2 > x_1 > x_4 > x_5$. On MCGDM problems under the linguistic environment, our model achieves performance comparable with the existing techniques or improves upon them. Unlike the existing models considered in this paper, our model incorporates probabilities, which helps to capture comprehensive and accurate preference information of the DMs [6]. In Ref. [3], for instance, the developed approaches take all the decisions and their relationships into account, and the decision arguments reinforce and support each other; however, since probabilities were not considered, the accuracy of the preference information of the DMs might be questionable. In addition, without the PLTS, it might not be easy for the DMs to provide several possible linguistic values over an alternative or an attribute. This situation translates into a limitation of the model proposed in Ref. [3], in spite of the power average (PA) involved in the aggregation process. The PLTS theory itself also has some limitations. In general, the WPLPG applies to the average of ratio data and is mainly used to calculate the average growth (or change) rate of data. Given the characteristics of the data in Table 6, the WPLPA is more suitable than the WPLPG.

6. Conclusions

With respect to the support and reinforcement among input arguments with PLTSs, we introduce the PA into the probabilistic linguistic environment. Meanwhile, we develop the corresponding new operators, i.e., the PLPA, PLPG, WPLPA and WPLPG operators. In light of the PLMCGDM, we describe the decision-making problem and design the corresponding approaches by employing the WPLPA and WPLPG operators. In this paper, we have expanded the applied field of the original PA and enriched the research work on PLTSs. Future research may focus on exploring decision-making mechanisms when the weight information is unknown or incomplete, and on developing new generalized aggregation operators of PLTSs. In addition, we will also investigate more complex case studies with more alternatives and criteria.

Acknowledgments

This work is partially supported by the National Science Foundation of China (Nos. 71401026, 71432003, 71571148), the Fundamental Research Funds for the Central Universities of China (No. ZYGX2014J100), the Social Science Planning Project of the Sichuan Province (No. SC15C009) and the Sichuan Youth Science and Technology Innovation Team (2016TD0013).

Author Contributions

Decui Liang designed the research work and the basic idea. Agbodah Kobina analyzed the data and finished the deduction procedure. Xin He also analyzed the data and modified the expression.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yager, R.R. The power average operator. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2001, 31, 724–731. [Google Scholar] [CrossRef]
  2. Xu, Z.S.; Yager, R.R. Power-Geometric operators and their use in group decision making. IEEE Trans. Fuzzy Syst. 2010, 18, 94–105. [Google Scholar]
  3. Xu, Y.J.; Merigó, J.M.; Wang, H.M. Linguistic power aggregation operators and their application to multiple attribute group decision making. Appl. Math. Model. 2012, 36, 5427–5444. [Google Scholar] [CrossRef]
  4. Zhou, L.G.; Chen, H.Y. A generalization of the power aggregation operators for linguistic environment and its application in group decision making. Knowl. Based Syst. 2012, 26, 216–224. [Google Scholar] [CrossRef]
  5. Zhu, C.; Zhu, L.; Zhang, X. Linguistic hesitant fuzzy power aggregation operators and their applications in multiple attribute decision-making. Inf. Sci. 2016, 367–368, 809–826. [Google Scholar] [CrossRef]
  6. Pang, Q.; Wang, H.; Xu, Z.S. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143. [Google Scholar] [CrossRef]
  7. Bai, C.Z.; Zhang, R.; Qian, L.X.; Wu, Y.N. Comparisons of probabilistic linguistic term sets for multi-criteria decision making. Knowl. Based Syst. 2017, 119, 284–291. [Google Scholar] [CrossRef]
  8. Merigó, J.M.; Casanovas, M.; Martínez, L. Linguistic aggregation operators for linguistic decision making based on the Dempster-Shafer theory of evidence. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2010, 18, 287–304. [Google Scholar] [CrossRef]
  9. Zhai, Y.L.; Xu, Z.S.; Liao, H.C. Probabilistic linguistic vector-term set and its application in group decision making with multi-granular linguistic information. Appl. Soft Comput. 2016, 49, 801–816. [Google Scholar] [CrossRef]
  10. Liao, H.C.; Xu, Z.S.; Zeng, X.J.; Merigó, J.M. Qualitative decision making with correlation coefficients of hesitant fuzzy linguistic term sets. Knowl. Based Syst. 2015, 76, 127–138. [Google Scholar] [CrossRef]
  11. Liao, H.C.; Xu, Z.S.; Zeng, X.J. Hesitant fuzzy linguistic vikor method and its application in qualitative multiple criteria decision making. IEEE Trans. Fuzzy Syst. 2015, 23, 1343–1355. [Google Scholar] [CrossRef]
  12. Rodriguez, R.M.; Martinez, L.; Herrera, F. Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119. [Google Scholar] [CrossRef]
  13. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539. [Google Scholar] [CrossRef]
  14. Liang, D.C.; Liu, D. A novel risk decision making based on decision-theoretic rough sets under hesitant fuzzy information. IEEE Trans. Fuzzy Syst. 2015, 23, 237–247. [Google Scholar] [CrossRef]
  15. Gou, X.J.; Xu, Z.S. Novel basic operational laws for linguistic terms, hesitant fuzzy linguistic term sets and probabilistic linguistic term sets. Inf. Sci. 2016, 372, 407–427. [Google Scholar] [CrossRef]
  16. He, Y.; Xu, Z.S.; Jiang, W.L. Probabilistic interval reference ordering sets in multi-criteria group decision making. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2017, 25, 189–212. [Google Scholar] [CrossRef]
  17. Wu, Z.B.; Xu, J.C. Possibility distribution-based approach for MAGDM with hesitant fuzzy linguistic information. IEEE Trans. Cybern. 2016, 46, 694–705. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, Y.X.; Xu, Z.S.; Wang, H.; Liao, H.C. Consistency-based risk assessment with probabilistic linguistic preference relation. Appl. Soft Comput. 2016, 49, 817–833. [Google Scholar] [CrossRef]
  19. Zhou, W.; Xu, Z.S. Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment. Fuzzy Optim. Decis. Mak. 2016. [Google Scholar] [CrossRef]
  20. Katz, D.A. Faculty salaries, promotions and productivity at a large University. Am. Econ. Rev. 1973, 63, 469–477. [Google Scholar]
  21. Bryson, N.; Mobolurin, A. An action learning evaluation procedure for multiple criteria decision making problems. Eur. J. Oper. Res. 1995, 96, 379–386. [Google Scholar] [CrossRef]
  22. Merigó, J.M.; Gil-Lafuente, A.M.; Zhou, L.G.; Chen, H.Y. Induced and linguistic generalized aggregation operators and their application in linguistic group decision making. Group Decis. Negot. 2012, 21, 531–549. [Google Scholar] [CrossRef]
  23. Zhang, Z.M.; Wu, C. Hesitant fuzzy linguistic aggregation operators and their applications to multiple attribute group decision making. J. Intell. Fuzzy Syst. 2014, 26, 2185–2202. [Google Scholar]
Table 1. Decision matrix A_1 provided by d_1.

        c_1    c_2    c_3
x_1     s_8    s_6    s_6
x_2     s_6    s_7    s_7
x_3     s_5    s_8    s_7
x_4     s_7    s_4    s_6
x_5     s_8    s_6    s_7
Table 2. Decision matrix A_2 provided by d_2.

        c_1    c_2    c_3
x_1     s_6    s_8    s_5
x_2     s_5    s_6    s_7
x_3     s_7    s_6    s_7
x_4     s_8    s_6    s_7
x_5     s_8    s_7    s_6
Table 3. Decision matrix A_3 provided by d_3.

        c_1    c_2    c_3
x_1     s_7    s_8    s_6
x_2     s_4    s_5    s_6
x_3     s_8    s_7    s_6
x_4     s_7    s_5    s_8
x_5     s_6    s_7    s_6
Table 4. Decision matrix A_4 provided by d_4.

        c_1    c_2    c_3
x_1     s_6    s_7    s_6
x_2     s_8    s_7    s_7
x_3     s_7    s_6    s_8
x_4     s_5    s_7    s_6
x_5     s_5    s_6    s_5
Table 5. The probabilistic linguistic decision matrix R.

x_1: c_1 = {s_8(0.25), s_6(0.5), s_7(0.25)}; c_2 = {s_6(0.25), s_8(0.5), s_7(0.25)}; c_3 = {s_6(0.75), s_5(0.25)}
x_2: c_1 = {s_6(0.25), s_5(0.25), s_4(0.25), s_8(0.25)}; c_2 = {s_7(0.5), s_6(0.25), s_5(0.25)}; c_3 = {s_7(0.75), s_6(0.25)}
x_3: c_1 = {s_5(0.25), s_7(0.5), s_8(0.25)}; c_2 = {s_8(0.25), s_6(0.5), s_7(0.25)}; c_3 = {s_7(0.5), s_6(0.25), s_8(0.25)}
x_4: c_1 = {s_7(0.5), s_8(0.25), s_5(0.25)}; c_2 = {s_4(0.25), s_6(0.25), s_5(0.25), s_7(0.25)}; c_3 = {s_6(0.5), s_7(0.25), s_8(0.25)}
x_5: c_1 = {s_8(0.5), s_6(0.25), s_5(0.25)}; c_2 = {s_6(0.5), s_7(0.5)}; c_3 = {s_7(0.25), s_6(0.5), s_5(0.25)}
Table 6. The normalized probabilistic linguistic decision matrix.

x_1: c_1 = {s_6(0.5), s_8(0.25), s_7(0.25), s_6(0)}; c_2 = {s_8(0.5), s_7(0.25), s_6(0.25), s_6(0)}; c_3 = {s_6(0.75), s_5(0.25), s_5(0), s_5(0)}
x_2: c_1 = {s_8(0.25), s_6(0.25), s_5(0.25), s_4(0.25)}; c_2 = {s_7(0.5), s_6(0.25), s_5(0.25), s_5(0)}; c_3 = {s_7(0.75), s_6(0.25), s_6(0), s_6(0)}
x_3: c_1 = {s_7(0.5), s_8(0.25), s_5(0.25), s_5(0)}; c_2 = {s_6(0.5), s_8(0.25), s_7(0.25), s_6(0)}; c_3 = {s_7(0.5), s_8(0.25), s_6(0.25), s_6(0)}
x_4: c_1 = {s_7(0.5), s_8(0.25), s_5(0.25), s_5(0)}; c_2 = {s_7(0.25), s_6(0.25), s_5(0.25), s_4(0.25)}; c_3 = {s_6(0.5), s_8(0.25), s_7(0.25), s_6(0)}
x_5: c_1 = {s_8(0.5), s_6(0.25), s_5(0.25), s_5(0)}; c_2 = {s_7(0.5), s_6(0.5), s_6(0), s_6(0)}; c_3 = {s_6(0.5), s_7(0.25), s_5(0.25), s_5(0)}
Table 7. The decision results of different methods.

Method                                           Rank
Aggregation-based method of Ref. [6]             x_3 > x_1 > x_5 > x_2 > x_4
The method with HFLWA of Ref. [23]               x_3 > x_1 > x_5 > x_2 = x_4
The method with HFLWG of Ref. [23]               x_1 > x_2 > x_5 > x_3 > x_4
Max lower operator of Ref. [12]                  x_3 > x_2 = x_5 = x_4 > x_1
ILGCIA with group decision making of Ref. [22]   x_3 > x_2 > x_1 > x_4 > x_5
Our proposed method with WPLPA                   x_3 > x_1 > x_5 > x_4 > x_2
Our proposed method with WPLPG                   x_1 > x_3 > x_5 > x_2 > x_4

Kobina, A.; Liang, D.; He, X. Probabilistic Linguistic Power Aggregation Operators for Multi-Criteria Group Decision Making. Symmetry 2017, 9, 320. https://doi.org/10.3390/sym9120320
