Article

An Extension TOPSIS Method Based on the Decision Maker’s Risk Attitude and the Adjusted Probabilistic Fuzzy Set

1 Department of Applied Statistics, Hunan University of Science and Technology, Xiangtan 411201, China
2 Department of Mathematics and Statistics, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(5), 891; https://doi.org/10.3390/sym13050891
Submission received: 19 April 2021 / Revised: 3 May 2021 / Accepted: 13 May 2021 / Published: 17 May 2021
(This article belongs to the Special Issue Research on Fuzzy Logic and Mathematics with Applications)

Abstract: The paper studies an extended TOPSIS method based on the adjusted probabilistic linguistic fuzzy set, in which the decision maker's behavioral tendency is taken into account. Firstly, we propose the concept of the probabilistic linguistic q-rung orthopair set (PLQROS) based on the probabilistic linguistic fuzzy set (PLFS) and the linguistic q-rung orthopair set (LQROS). Operational laws are introduced for transformed PLQROSs that share the same probabilities; through this adjustment, the irrationality of existing methods in the aggregation process is avoided. Furthermore, we propose a comparison rule for PLQROSs and aggregation operators. Distance measures between PLQROSs are also defined, which can handle the symmetric information in multi-attribute decision-making problems. Since the decision maker's behavior has a very important impact on decision results, we propose a behavioral TOPSIS decision-making method for PLQROSs. Finally, we apply the method to a practical investment decision problem to demonstrate the validity of the extended TOPSIS method; the merits of the behavioral decision method are demonstrated by comparison with the classic TOPSIS method, and the results of a sensitivity analysis of the decision maker's behavior are also given.

1. Introduction

There is much uncertainty in decision-making problems. It is often difficult to describe evaluation information with accurate numerical values; such information can only be described with linguistic values. Zadeh [1,2,3] defined the linguistic term set (LTS) and applied it to express qualitative evaluations. For example, the commonly used seven-valued LTS has the form S = {s_0: great distaste, s_1: distaste, s_2: a bit distaste, s_3: generally, s_4: a bit favorite, s_5: favorite, s_6: great favorite}; it can be used to describe how much the decision maker likes an object. However, when the decision maker hesitates about the preference for the evaluation object, a single linguistic term can no longer describe such information. So Rodríguez et al. [4] presented the hesitant fuzzy linguistic term set (HFLTS), each element of which is a collection of linguistic terms. For example, if the decision maker thinks that the audience's opinion of a program is "favorite" or "great favorite", the evaluation can only be expressed in the form {s_5, s_6}. Since the HFLTS was put forward, extensions of it have been developed and applied in many fields [5,6,7,8,9,10,11]. Beg and Rashid [5] proposed a TOPSIS method for HFLTSs and applied it to rank alternatives, Liao et al. [6] defined the preference relation of HFLTSs, and Liu et al. [7] applied HFLTSs to the generalized TOPSIS method and presented a new similarity measure for HFLTSs. In these studies, however, all linguistic terms in an HFLTS have the same weight, which rarely happens in reality. In fact, decision makers may assign different degrees of possibility to the candidate linguistic evaluations. Therefore, Pang et al. [12] extended the HFLTS to the PLFS by adding a probability to each element.
For example, if the decision maker believes that the possibility of "favorite" for the program is 0.4 and the possibility of "great favorite" is 0.6, then the above evaluation can be represented as {s_5(0.4), s_6(0.6)}.
The above LTSs only describe the membership degree of elements. To widen their range of application, Chen [13] defined the linguistic intuitionistic fuzzy set (LIFS) L_I = {⟨x_i, s_{θ_1}(x_i), s_{φ_1}(x_i)⟩ | x_i ∈ X}, where s_{θ_1}(x_i), s_{φ_1}(x_i) ∈ S (S = {s_i | s_0 ≤ s_i ≤ s_{2τ}}) and the membership s_{θ_1}(x_i) and the non-membership s_{φ_1}(x_i) satisfy 0 ≤ θ_1 + φ_1 ≤ 2τ. Furthermore, Garg [14] defined the linguistic Pythagorean fuzzy set (LPFS) L_P = {⟨x_i, s_{θ_2}(x_i), s_{φ_2}(x_i)⟩ | x_i ∈ X}, where s_{θ_2}(x_i), s_{φ_2}(x_i) ∈ S and the membership s_{θ_2}(x_i) and the non-membership s_{φ_2}(x_i) must satisfy 0 ≤ (θ_2)^2 + (φ_2)^2 ≤ (2τ)^2; its advantage is a wider range of uncertainty than the LIFS. In order to describe the uncertainty in decision-making problems still better, Liu et al. [7] proposed the LQROS L_Q = {⟨x_i, s_θ(x_i), s_φ(x_i)⟩ | x_i ∈ X} based on the q-rung orthopair fuzzy set (QROFS) [15], where the membership s_θ(x_i) and the non-membership s_φ(x_i) satisfy 0 ≤ (θ)^q + (φ)^q ≤ (2τ)^q (q ≥ 1). Obviously, when q = 1 or 2, the LQROS reduces to the LIFS or the LPFS, respectively. Although the LQROS extends the scope of information representation, it cannot describe the following evaluation information. For the LTS S = {s_0: extreme slowly, s_1: slowly, s_2: slightly slowly, s_3: generally, s_4: slightly high, s_5: high, s_6: extreme high}, one expert believes that with 30% possibility the profit from investing in the project is high and with 70% possibility it is extremely high, while another expert may believe that with 10% possibility the degree of not making a profit is extremely slow and with 90% possibility it is slightly slow.
Up to now, no existing LTS can describe the above evaluation information. Motivated by this, we introduce the PLQROS by integrating the LQROS and the probabilistic fuzzy set; the above information can then be represented as Q_s(p) = ⟨{s_5(0.3), s_6(0.7)}, {s_0(0.1), s_2(0.9)}⟩. The detailed definition is given in Section 3.1.
On the other hand, TOPSIS is a classical method for handling multiple criteria decision-making problems. Since it was introduced by Hwang and Yoon [16], a large literature on the TOPSIS method has developed; see, e.g., [17,18,19,20,21,22]. The TOPSIS method is a useful technique for choosing the alternative that is simultaneously closest to the best alternative and farthest from the worst alternative. Furthermore, Yoon and Kim [23] proposed a behavioral TOPSIS method that incorporates the gains and losses of behavioral economics, which makes decision results more reasonable. To the best of our knowledge, no related research has studied the behavioral TOPSIS method in uncertain decision environments. Inspired by this, we study a TOPSIS method that considers the decision maker's risk attitude together with the adjusted probabilistic fuzzy set. The main contributions of the paper are as follows:
(1)
The operational laws of PLQROSs are given based on adjusted PLQROSs with the same probabilities, which avoids unreasonable calculations and improves the adaptability of PLQROSs in practice.
(2)
New aggregation operators and distance measures between PLQROSs are presented, which can represent the differences between PLQROSs and deal with symmetric information.
(3)
The behavioral TOPSIS method is introduced into the uncertain multi-attribute decision-making process, extending a method previously used only in deterministic environments.
The remainder of the paper is organized as follows. In Section 2, some related concepts are reviewed. In Section 3, we introduce the operational laws of PLQROSs, the aggregation operators and distance measures of PLQROSs, and their corresponding properties. In Section 4, the steps of the behavioral decision algorithm are given. In Section 5, a practical example is used to demonstrate the applicability of the extended TOPSIS method, and a sensitivity analysis of the behavioral factors reflecting the decision maker's risk attitude is provided. Finally, we summarize the paper and outline future studies.

2. Preliminaries

In order to define the PLQROS, we introduce some concepts of the LTS, PLFS, QROFS and LQROS. Throughout the paper, assume X = {x_1, x_2, ..., x_m} is a non-empty finite set.
In uncertain decision-making environments, experts apply the LTS to make qualitative descriptions; it is defined as follows:
Definition 1
([1]). Assume S = {s_α | α = 0, 1, ..., 2g} is a finite set, where s_α is a linguistic term and g is a natural number. The LTS S should satisfy two properties:
(1) 
if α ≥ β, then s_α ≥ s_β;
(2) 
s_α = neg(s_β), where α + β = 2g.
In order to describe decision information more objectively, Xu [24] extended the LTS S to a continuous LTS S̄ = {s_α | α ∈ [0, ρ]}, where ρ (ρ > 2g) is a natural number.
The PLFS is regarded as an extension of the HFLTS in which the elements of the LTS may carry different weights; the PLFS is denoted as follows:
Definition 2
([12]). Assume X = {x_1, x_2, ..., x_m} and S = {s_0, s_1, ..., s_{2g}} is an LTS; the PLFS Z_s(r) in X is defined as:
Z_s(r) = {⟨x_j, z_s(r)(x_j)⟩ | x_j ∈ X},
where z_s(r)(x_j) = {s_j^{(u)}(r^{(u)}) | s_j^{(u)} ∈ S, r^{(u)} ≥ 0, u = 1, 2, ..., U; Σ_{u=1}^{U} r^{(u)} ≤ 1}, s_j^{(u)} is a linguistic term, r^{(u)} is the corresponding probability of s_j^{(u)}, and U is the number of linguistic terms s_j^{(u)}.
Next, we introduce the concept of QROFS as follows:
Definition 3
([15]). Assume X = {x_1, x_2, ..., x_m}; the QROFS Q is represented as:
Q = {⟨x_j, μ_Q(x_j), ν_Q(x_j)⟩ | x_j ∈ X}, q ≥ 1,
where the membership μ_Q(x_j) (0 ≤ μ_Q(x_j) ≤ 1) and the non-membership ν_Q(x_j) (0 ≤ ν_Q(x_j) ≤ 1) satisfy 0 ≤ (μ_Q(x_j))^q + (ν_Q(x_j))^q ≤ 1, and π_Q(x_j) = (1 − (μ_Q(x_j))^q − (ν_Q(x_j))^q)^{1/q} is the indeterminacy degree of the QROFS Q.
If X = {x}, the QROFS Q reduces to a q-rung orthopair fuzzy number (QROFN) Q = ⟨μ_Q, ν_Q⟩.
Remark 1.
If q = 1 or 2, the QROFS Q degenerates to an intuitionistic fuzzy set (IFS) or a Pythagorean fuzzy set (PFS), respectively.
In some realistic decision making problems, the set should be described qualitatively. So we review the concept of LQROS as follows:
Definition 4
([25]). Assume X = {x_1, x_2, ..., x_m} and S̄ = {s_α | α ∈ [0, ρ]} (ρ > 2g) is an LTS; the LQROS Y is represented as:
Y = {⟨x_j, s_θ(x_j), s_φ(x_j)⟩ | x_j ∈ X},
where the membership s_θ(x_j) and the non-membership s_φ(x_j) satisfy 0 ≤ θ_j^q + φ_j^q ≤ ρ^q (q ≥ 1), and the indeterminacy degree is π_Y(x_j) = s_{(ρ^q − θ_j^q − φ_j^q)^{1/q}}.
If X = {x}, the LQROS Y degenerates to a linguistic q-rung orthopair number (LQRON) Y = ⟨s_θ, s_φ⟩.
Definition 5
([25]). Let y_1 = ⟨s_{θ_1}, s_{φ_1}⟩ and y_2 = ⟨s_{θ_2}, s_{φ_2}⟩ be two LQRONs, s_{θ_ι}, s_{φ_ι} ∈ S̄[0, ρ] (ι = 1, 2), and ϵ > 0. The operational laws of the LQRONs can be expressed as follows:
(a) y_1 ⊕ y_2 = ⟨s_{((θ_1)^q + (θ_2)^q − (θ_1θ_2/ρ)^q)^{1/q}}, s_{φ_1φ_2/ρ}⟩;
(b) y_1 ⊗ y_2 = ⟨s_{θ_1θ_2/ρ}, s_{((φ_1)^q + (φ_2)^q − (φ_1φ_2/ρ)^q)^{1/q}}⟩;
(c) ϵy_1 = ⟨s_{(ρ^q − ρ^q(1 − (θ_1)^q/ρ^q)^ϵ)^{1/q}}, s_{ρ(φ_1/ρ)^ϵ}⟩;
(d) y_1^ϵ = ⟨s_{ρ(θ_1/ρ)^ϵ}, s_{(ρ^q − ρ^q(1 − (φ_1)^q/ρ^q)^ϵ)^{1/q}}⟩.
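As a numerical illustration of laws (a)–(d), the sketch below implements them for LQRONs stored as (θ, φ) subscript pairs; the function names and the defaults ρ = 6, q = 3 are our own conventions, not from the paper.

```python
# Sketch of Definition 5's operational laws for LQRONs, represented as
# (theta, phi) subscript pairs. rho and q are parameters of the LTS/LQROS.

def lqron_add(y1, y2, q=3, rho=6):
    """(a) y1 (+) y2."""
    (t1, f1), (t2, f2) = y1, y2
    return ((t1**q + t2**q - (t1 * t2 / rho)**q) ** (1 / q),
            f1 * f2 / rho)

def lqron_mul(y1, y2, q=3, rho=6):
    """(b) y1 (x) y2."""
    (t1, f1), (t2, f2) = y1, y2
    return (t1 * t2 / rho,
            (f1**q + f2**q - (f1 * f2 / rho)**q) ** (1 / q))

def lqron_scale(eps, y, q=3, rho=6):
    """(c) eps * y."""
    t, f = y
    return ((rho**q - rho**q * (1 - t**q / rho**q)**eps) ** (1 / q),
            rho * (f / rho)**eps)

def lqron_power(y, eps, q=3, rho=6):
    """(d) y ** eps."""
    t, f = y
    return (rho * (t / rho)**eps,
            (rho**q - rho**q * (1 - f**q / rho**q)**eps) ** (1 / q))
```

For instance, with q = 3 and ρ = 6, lqron_add((4, 1), (3, 3)) returns the subscript pair (83^{1/3} ≈ 4.3621, 0.5), matching the membership formula in (a).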

3. The Proposed Probabilistic Fuzzy Set

Now we propose a new probabilistic fuzzy set, the PLQROS, which not only allows experts to express evaluation information with multiple linguistic terms but also records the possibility of each linguistic term. The difficulty is how to define the operational laws of the PLQROS reasonably when the corresponding probability distributions are different.

3.1. The Basic Definition of PLQROS

Definition 6.
Let X = {x_1, x_2, ..., x_m} and S̄[0, ρ] (ρ > 2g) be a continuous LTS; the PLQROS PL_s(r) in X can be defined as:
PL_s(r) = {⟨x_j, H_s(r̂)(x_j), G_s(r̃)(x_j)⟩ | x_j ∈ X},
where H_s(r̂)(x_j) = {s_{θ_j^{(u)}}(r̂^{(u)}) | s_{θ_j^{(u)}} ∈ S̄[0, ρ], r̂^{(u)} ≥ 0, Σ_{u=1}^{U} r̂^{(u)} ≤ 1} is the membership and G_s(r̃)(x_j) = {s_{φ_j^{(v)}}(r̃^{(v)}) | s_{φ_j^{(v)}} ∈ S̄[0, ρ], r̃^{(v)} ≥ 0, Σ_{v=1}^{V} r̃^{(v)} ≤ 1} is the non-membership, respectively. For any x_j ∈ X, they satisfy 0 ≤ (max_{u=1,...,U} {θ_j^{(u)}})^q + (max_{v=1,...,V} {φ_j^{(v)}})^q ≤ ρ^q (q ≥ 1).
If X = {x}, the PLQROS PL_s(r) degenerates to a PLQRON pl_s(r) = ⟨{s_{θ^{(u)}}(r̂^{(u)})}, {s_{φ^{(v)}}(r̃^{(v)})}⟩, where s_{θ^{(u)}}, s_{φ^{(v)}} ∈ S̄[0, ρ], Σ_{u=1}^{U} r̂^{(u)} ≤ 1 and Σ_{v=1}^{V} r̃^{(v)} ≤ 1.
Example 1.
Let S = {s_0: very slowly, s_1: slowly, s_2: slightly slowly, s_3: generally, s_4: slightly fast, s_5: fast, s_6: very fast}. Two groups of experts inspected the development of a company. One group may think that "the speed of the company's development is slightly slow with 100% possibility; with 40% probability it is not slightly slow, with 40% probability it is not general, and with 20% probability it is not slightly fast". The other group thinks that "with 20% probability the speed of the company's development is slow, with 60% probability it is slightly slow, and with 20% probability it is general; with 50% probability it is not slightly fast and with 50% probability it is not very fast". Then the above evaluation information can be denoted as pl_{s_1}(r) = ⟨{s_2(1)}, {s_2(0.4), s_3(0.4), s_4(0.2)}⟩ and pl_{s_2}(r) = ⟨{s_1(0.2), s_2(0.6), s_3(0.2)}, {s_5(0.5), s_6(0.5)}⟩.
According to Example 1, the probabilities and the numbers of elements in pl_{s_1}(r) and pl_{s_2}(r) are not the same. The general operations on PLFSs multiply the probabilities of the corresponding linguistic terms directly, which may cause unreasonable results. Therefore, Wu et al. [26] presented a method to modify the probabilities of the linguistic terms so that they coincide, which is given as follows:
Let S̄[0, ρ] (ρ > 2g) be a continuous LTS, and let pl_{s_1}(r) = ⟨{s_{θ_1^{(u)}}(r̂^{(u)})}, {s_{φ_1^{(v)}}(r̃^{(v)})}⟩ (u = 1, 2, ..., U_1; v = 1, 2, ..., V_1) and pl_{s_2}(r) = ⟨{s_{θ_2^{(u)}}(r̂^{(u)})}, {s_{φ_2^{(v)}}(r̃^{(v)})}⟩ (u = 1, 2, ..., U_2; v = 1, 2, ..., V_2) be two PLQRONs. We adjust the probability distributions of pl_{s_1}(r) and pl_{s_2}(r) to be the same; that is, pl_{s_1}(r) = ⟨{s_{θ_1^{(k)}}(r^{(k)})}, {s_{φ_1^{(b)}}(r^{(b)})}⟩ and pl_{s_2}(r) = ⟨{s_{θ_2^{(k)}}(r^{(k)})}, {s_{φ_2^{(b)}}(r^{(b)})}⟩ (k = 1, 2, ..., K; b = 1, 2, ..., B). Applying the method of Wu et al. [26] to adjust the PLQRONs, the linguistic terms and the sum of the probabilities of each linguistic term set are unchanged, which means that the adjustment does not lose evaluation information.
Example 2.
Let S = {s_α | α = 0, 1, 2, ..., 6}, and let pl_{s_1}(r) = ⟨{s_2(1)}, {s_3(0.4), s_4(0.4), s_6(0.2)}⟩ and pl_{s_2}(r) = ⟨{s_1(0.2), s_2(0.6), s_3(0.2)}, {s_5(0.5), s_6(0.5)}⟩ be two PLQRONs. The adjusted PLQRONs are pl_{s_1}(r) = ⟨{s_2(0.2), s_2(0.6), s_2(0.2)}, {s_3(0.4), s_4(0.1), s_4(0.3), s_6(0.2)}⟩ and pl_{s_2}(r) = ⟨{s_1(0.2), s_2(0.6), s_3(0.2)}, {s_5(0.4), s_5(0.1), s_6(0.3), s_6(0.2)}⟩, respectively. The adjustment process is shown in Figure 1.
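The adjustment of Example 2 amounts to building a common refinement of two discrete probability distributions: terms are split until both probability vectors coincide, so no linguistic term or probability mass is lost. A minimal sketch of this splitting step (the function name and tolerance are our own; the paper follows the method of Wu et al. [26]):

```python
def align(d1, d2):
    """Split two discrete probability distributions over linguistic terms
    into a common refinement whose probability vectors coincide.
    Each distribution is a list of (subscript, probability) pairs."""
    d1, d2 = list(d1), list(d2)
    out1, out2 = [], []
    i = j = 0
    while i < len(d1) and j < len(d2):
        (s1, p1), (s2, p2) = d1[i], d2[j]
        p = min(p1, p2)
        out1.append((s1, p))           # both sides receive the same mass p
        out2.append((s2, p))
        if abs(p1 - p2) < 1e-12:       # both terms fully consumed
            i += 1
            j += 1
        elif p1 < p2:                  # term s2 keeps its remainder
            i += 1
            d2[j] = (s2, round(p2 - p, 10))
        else:                          # term s1 keeps its remainder
            j += 1
            d1[i] = (s1, round(p1 - p, 10))
    return out1, out2
```

Applied to the membership parts of Example 2, align([(2, 1.0)], [(1, 0.2), (2, 0.6), (3, 0.2)]) yields the refined pair {s_2(0.2), s_2(0.6), s_2(0.2)} and {s_1(0.2), s_2(0.6), s_3(0.2)}.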

3.2. Some Properties for PLQRONs

Firstly, we apply the adjustment method so that the probabilistic linguistic terms have the same probabilities, which overcomes the defects that may occur in the aggregation process. Then we propose the operational rules of the adjusted PLQRONs and their properties.
Definition 7.
Let S̄[0, ρ] (ρ > 2g) be an LTS, and let pl_{s_1}(r) = ⟨{s_{θ_1^{(u)}}(r̂^{(u)})}, {s_{φ_1^{(v)}}(r̃^{(v)})}⟩ and pl_{s_2}(r) = ⟨{s_{θ_2^{(u)}}(r̂^{(u)})}, {s_{φ_2^{(v)}}(r̃^{(v)})}⟩ (u = 1, 2, ..., U; v = 1, 2, ..., V) be two adjusted PLQRONs, where θ_ι^{(u)}, φ_ι^{(v)} (ι = 1, 2) are the subscripts of s_{θ_ι^{(u)}}, s_{φ_ι^{(v)}} (ι = 1, 2), and η > 0. The operational laws of the PLQRONs can be expressed as follows:
(a) neg(pl_{s_1}(r)) = ⟨{s_{φ_1^{(v)}}(r̃^{(v)})}, {s_{θ_1^{(u)}}(r̂^{(u)})}⟩;
(b) pl_{s_1}(r) ⊕ pl_{s_2}(r) = ⟨{s_{((θ_1^{(u)})^q + (θ_2^{(u)})^q − (θ_1^{(u)}θ_2^{(u)}/ρ)^q)^{1/q}}(r̂^{(u)})}, {s_{φ_1^{(v)}φ_2^{(v)}/ρ}(r̃^{(v)})}⟩;
(c) pl_{s_1}(r) ⊗ pl_{s_2}(r) = ⟨{s_{θ_1^{(u)}θ_2^{(u)}/ρ}(r̂^{(u)})}, {s_{((φ_1^{(v)})^q + (φ_2^{(v)})^q − (φ_1^{(v)}φ_2^{(v)}/ρ)^q)^{1/q}}(r̃^{(v)})}⟩;
(d) ηpl_{s_1}(r) = ⟨{s_{(ρ^q − ρ^q(1 − (θ_1^{(u)})^q/ρ^q)^η)^{1/q}}(r̂^{(u)})}, {s_{ρ(φ_1^{(v)}/ρ)^η}(r̃^{(v)})}⟩;
(e) (pl_{s_1}(r))^η = ⟨{s_{ρ(θ_1^{(u)}/ρ)^η}(r̂^{(u)})}, {s_{(ρ^q − ρ^q(1 − (φ_1^{(v)})^q/ρ^q)^η)^{1/q}}(r̃^{(v)})}⟩.
Example 3.
Let S = {s_α | α = 0, 1, 2, ..., 6}, and let pl_{s_1}(r) = ⟨{s_4(0.4), s_5(0.6)}, {s_1(0.7), s_2(0.3)}⟩ and pl_{s_2}(r) = ⟨{s_3(0.2), s_4(0.5), s_5(0.3)}, {s_3(0.5), s_4(0.5)}⟩ be two PLQRONs. The adjusted PLQRONs are pl_{s_1}(r) = ⟨{s_4(0.2), s_4(0.2), s_5(0.3), s_5(0.3)}, {s_1(0.5), s_1(0.2), s_2(0.3)}⟩ and pl_{s_2}(r) = ⟨{s_3(0.2), s_4(0.2), s_4(0.3), s_5(0.3)}, {s_3(0.5), s_4(0.2), s_4(0.3)}⟩; the adjustment process is shown in Figure 2. Let η = 0.5 and q = 3; then we have
n e g ( p l s 1 ( r ) ) = { s 1 ( 0.5 ) , s 1 ( 0.2 ) , s 2 ( 0.3 ) } , { s 4 ( 0.2 ) , s 4 ( 0.2 ) , s 5 ( 0.3 ) , s 5 ( 0.3 ) } ; p l s 1 ( r ) p l s 2 ( r ) = { s 4.3621 ( 0.2 ) , s 4.7774 ( 0.2 ) , s 5.3364 ( 0.3 ) , s 5.6217 ( 0.3 ) } , { s 0.5 ( 0.5 ) , s 0.6667 ( 0.2 ) , s 1.3333 ( 0.3 ) } ; p l s 1 ( r ) p l s 2 ( r ) = { s 2 ( 0.2 ) , s 2.6667 ( 0.2 ) , s 3.3333 ( 0.3 ) , s 4.1667 ( 0.3 ) } , { s 3.0321 ( 0.5 ) , s 4.0146 ( 0.2 ) , s 4.114 ( 0.3 ) } ; 0.5 p l s 1 ( r ) = { s 3.2649 ( 0.2 ) , s 3.2649 ( 0.2 ) , s 4.2321 ( 0.3 ) , s 4.2321 ( 0.3 ) } , { s 2.4495 ( 0.5 ) , s 2.4495 ( 0.2 ) , s 3.4641 ( 0.3 ) } ; ( p l s 1 ( r ) ) 0.5 = { s 4.899 ( 0.2 ) , s 4.899 ( 0.2 ) , s 5.4772 ( 0.3 ) , s 5.4772 ( 0.3 ) } , { s 0.794 ( 0.5 ) , s 0.794 ( 0.2 ) , s 1.5924 ( 0.3 ) } .
Theorem 1.
Let p l s 1 ( r ) = { s θ 1 ( u ) ( r ^ ( u ) ) } , { s ϕ 1 ( v ) ( r ˜ ( v ) ) } and p l s 2 ( r ) = { s θ 2 ( u ) ( r ^ ( u ) ) } , { s ϕ 2 ( v ) ( r ˜ ( v ) ) } ( u = 1 , 2 , , U ; v = 1 , 2 , , V ) be any two adjusted PLQRONs, η , η 1 , η 2 > 0 , then
(1) 
p l s 1 ( r ) p l s 2 ( r ) = p l s 2 ( r ) p l s 1 ( r ) ;
(2) 
p l s 1 ( r ) p l s 2 ( r ) = p l s 2 ( r ) p l s 1 ( r ) ;
(3) 
η ( p l s 1 ( r ) p l s 2 ( r ) ) = η p l s 1 ( r ) η p l s 2 ( r ) ;
(4) 
η 1 p l s 1 ( r ) η 2 p l s 1 ( r ) = ( η 1 + η 2 ) p l s 1 ( r ) ;
(5) 
( p l s 1 ( r ) ) η 1 ( p l s 1 ( r ) ) η 2 = ( p l s 1 ( r ) ) η 1 + η 2 ;
(6) 
( p l s 1 ( r ) ) η ( p l s 2 ( r ) ) η = ( p l s 1 ( r ) p l s 2 ( r ) ) η .
Here we prove properties (1) and (3); the proofs of the other properties are similar and are omitted.
(1)
By Definition 7, we have
p l s 1 ( r ) p l s 2 ( r ) = { s ( ( θ 1 ( u ) ) q + ( θ 2 ( u ) ) q ( ( θ 1 ( u ) ) ( θ 2 ( u ) ) ρ ) q ) 1 q ( r ^ ( u ) ) } , { s ϕ 1 ( v ) ϕ 2 ( v ) ρ ( r ˜ ( v ) ) } = { s ( ( θ 2 ( u ) ) q + ( θ 1 ( u ) ) q ( ( θ 2 ( u ) ) ( θ 1 ( u ) ) ρ ) q ) 1 q ( r ^ ( u ) ) } , { s ϕ 2 ( v ) ϕ 1 ( v ) ρ ( r ˜ ( v ) ) } = p l s 2 ( r ) p l s 1 ( r ) .
Therefore p l s 1 ( r ) p l s 2 ( r ) = p l s 2 ( r ) p l s 1 ( r ) is obtained.
(3)
By Definition 7, we can get
η ( p l s 1 ( r ) p l s 2 ( r ) ) = η { s ( ( θ 1 ( u ) ) q + ( θ 2 ( u ) ) q ( θ 1 ( u ) θ 2 ( u ) ρ ) q ) 1 q ( r ^ ( u ) ) } , { s ϕ 1 ( v ) ϕ 2 ( v ) ρ ( r ˜ ( v ) ) } = { s ( ρ q ρ q ( 1 ( θ 1 ( u ) ) q + ( θ 2 ( u ) ) q ( ( θ 1 ( u ) ) ( θ 2 ( u ) ) ρ ) q ρ q ) η ) 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ 1 ( v ) ϕ 2 ( v ) ρ 2 ) η ( r ˜ ( v ) ) } .
Moreover, since
η ( p l s 1 ( r ) ) = { s ( ρ q ρ q ( 1 ( θ 1 ( u ) ) q ρ q ) η ) 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ 1 ( v ) ρ ) η ( r ˜ ( v ) ) } , η ( p l s 2 ( r ) ) = { s ( ρ q ρ q ( 1 ( θ 2 ( u ) ) q ρ q ) η ) 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ 2 ( v ) ρ ) η ( r ˜ ( v ) ) } ,
let ϖ = ( ρ q ρ q ( 1 ( θ 1 ( u ) ) q ρ q ) η ) 1 q and χ = ( ρ q ρ q ( 1 ( θ 2 ( u ) ) q ρ q ) η ) 1 q , the above formulas can be denoted as:
η p l s 1 ( r ) η p l s 2 ( r ) = { s ( ϖ q + χ q ( ϖ · χ ρ ) q ) 1 / q ( r ^ ( u ) ) } , { s ρ ( ϕ 1 ( v ) ϕ 2 ( v ) ρ 2 ) η ( r ˜ ( v ) ) } = { s { ρ q ρ q · [ ( 1 ( θ 1 ( u ) ) q ρ q ) η + ( 1 ( θ 2 ( u ) ) q ρ q ) η ] ( 1 ( 1 ( θ 1 ( u ) ) q ρ q ) η ) · [ ρ q ρ q ( 1 ( θ 2 ( u ) ) q ρ q ) η ] } 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ 1 ( v ) ϕ 2 ( v ) ρ 2 ) η ( r ˜ ( v ) ) } = { s { ρ q ρ q · [ ( 1 ( θ 1 ( u ) ) q ρ q ) η + ( 1 ( θ 2 ( u ) ) q ρ q ) η ] + ρ q [ ( 1 ( θ 1 ( u ) ) q ρ q ) η + ( 1 ( θ 2 ( u ) ) q ρ q ) η ( 1 ( θ 1 ( u ) ) q ρ q ) η · ( 1 ( θ 2 ( u ) ) q ρ q ) η ] } 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ 1 ( v ) ϕ 2 ( v ) ρ 2 ) η ( r ˜ ( v ) ) } = { s { ρ q ρ q · [ ( 1 ( θ 1 ( u ) ) q ρ q ) η · ( 1 ( θ 2 ( u ) ) q ρ q ) η ] } 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ 1 ( v ) ϕ 2 ( v ) ρ 2 ) η ( r ˜ ( v ) ) } = { s ( ρ q ρ q ( 1 ( θ 1 ( u ) ) q + ( θ 2 ( u ) ) q ( ( θ 1 ( u ) ) ( θ 2 ( u ) ) ρ ) q ρ q ) η ) 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ 1 ( v ) ϕ 2 ( v ) ρ 2 ) η ( r ˜ ( v ) ) } = η ( p l s 1 ( r ) p l s 2 ( r ) ) .
Therefore η ( p l s 1 ( r ) p l s 2 ( r ) ) = η p l s 1 ( r ) η p l s 2 ( r ) is proved.
In order to compare PLQROSs, we present the comparison rules as follows:
Definition 8.
Let S̄[0, ρ] (ρ > 2g) be an LTS. For any adjusted PLQRON pl_s(r) = ⟨{s_{θ^{(u)}}(r̂^{(u)})}, {s_{φ^{(v)}}(r̃^{(v)})}⟩, where s_{θ^{(u)}}, s_{φ^{(v)}} ∈ S̄[0, ρ] (u = 1, 2, ..., U; v = 1, 2, ..., V), the score function of pl_s(r) is
A(pl_s(r)) = Σ_{u=1}^{#U_θ} (θ^{(u)} · r̂^{(u)} / ρ)^q − Σ_{v=1}^{#V_φ} (φ^{(v)} · r̃^{(v)} / ρ)^q,
where θ^{(u)}, φ^{(v)} ∈ [0, ρ], and #U_θ and #V_φ represent the numbers of elements in the corresponding sets, respectively.
The accuracy function of p l s ( r ) is
H(pl_s(r)) = Σ_{u=1}^{#U_θ} (θ^{(u)} · r̂^{(u)} / ρ)^q + Σ_{v=1}^{#V_φ} (φ^{(v)} · r̃^{(v)} / ρ)^q,
where θ^{(u)}, φ^{(v)} ∈ [0, ρ], and #U_θ and #V_φ represent the numbers of elements in the corresponding sets, respectively.
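The score and accuracy functions translate directly into code. In the sketch below, a PLQRON is a pair of (subscript, probability) lists and the names score/accuracy, as well as the defaults q = 3 and ρ = 6, are our own:

```python
def score(pl, q=3, rho=6):
    """Score A(pl): membership mass minus non-membership mass."""
    H, G = pl
    return (sum((t * p / rho) ** q for t, p in H)
            - sum((f * p / rho) ** q for f, p in G))

def accuracy(pl, q=3, rho=6):
    """Accuracy H(pl): membership mass plus non-membership mass."""
    H, G = pl
    return (sum((t * p / rho) ** q for t, p in H)
            + sum((f * p / rho) ** q for f, p in G))
```

For a symmetric PLQRON such as ⟨{s_3(1)}, {s_3(1)}⟩ with q = 3 and ρ = 6, the score is 0 and the accuracy is 2 · (3/6)^3 = 0.25, as the formulas require.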
Theorem 2.
Let pl_{s_1}(r) = ⟨{s_{θ_1^{(u)}}(r̂^{(u)})}, {s_{φ_1^{(v)}}(r̃^{(v)})}⟩ and pl_{s_2}(r) = ⟨{s_{θ_2^{(u)}}(r̂^{(u)})}, {s_{φ_2^{(v)}}(r̃^{(v)})}⟩ be two adjusted PLQRONs, let A(pl_{s_1}(r)) and A(pl_{s_2}(r)) be their score functions, and let H(pl_{s_1}(r)) and H(pl_{s_2}(r)) be their accuracy functions, respectively. Then the order relation of pl_{s_1}(r) and pl_{s_2}(r) is given as follows:
(1) 
If A(pl_{s_1}(r)) > A(pl_{s_2}(r)), then pl_{s_1}(r) ≻ pl_{s_2}(r);
(2) 
If A ( p l s 1 ( r ) ) = A ( p l s 2 ( r ) ) , then
(a) 
If H(pl_{s_1}(r)) = H(pl_{s_2}(r)), then pl_{s_1}(r) ∼ pl_{s_2}(r);
(b) 
If H(pl_{s_1}(r)) < H(pl_{s_2}(r)), then pl_{s_1}(r) ≺ pl_{s_2}(r);
(c) 
If H(pl_{s_1}(r)) > H(pl_{s_2}(r)), then pl_{s_1}(r) ≻ pl_{s_2}(r);
(3) 
If A(pl_{s_1}(r)) < A(pl_{s_2}(r)), then pl_{s_1}(r) ≺ pl_{s_2}(r).

3.3. The Aggregation Operators of PLQROSs

In order to aggregate multi-attribute information, we introduce the aggregation operators of PLQRONs as follows.
Definition 9.
Let S̄[0, ρ] (ρ > 2g) be an LTS, and let pl_{s_ι}(r) = ⟨{s_{θ_ι^{(u)}}(r̂^{(u)})}, {s_{φ_ι^{(v)}}(r̃^{(v)})}⟩ (ι = 1, 2, ..., n; u = 1, 2, ..., U; v = 1, 2, ..., V) be n adjusted PLQRONs, where s_{θ_ι^{(u)}}, s_{φ_ι^{(v)}} ∈ S̄[0, ρ]. The probabilistic linguistic q-rung orthopair weighted averaging (PLQROWA) operator can be expressed as:
P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = ω 1 p l s 1 ( r ) ω 2 p l s 2 ( r ) ω n p l s n ( r ) = { s ( ρ q ρ q ι = 1 n ( 1 ( θ ι ( u ) ) q ρ q ) ω ι ) 1 q ( r ^ ( u ) ) } , { s ι = 1 n ρ ( ϕ ι ( v ) ρ ) ω ι ( r ˜ ( v ) ) } ,
where ω = ( ω 1 , ω 2 , , ω n ) T is the weight vector, and it satisfies ι = 1 n ω ι = 1 ( 0 ω ι 1 ) .
Theorem 3.
Let pl_{s_ι}(r) = ⟨{s_{θ_ι^{(u)}}(r̂^{(u)})}, {s_{φ_ι^{(v)}}(r̃^{(v)})}⟩ (ι = 1, 2, ..., n; u = 1, 2, ..., U; v = 1, 2, ..., V) be n adjusted PLQRONs, where the weights ω_ι (ι = 1, 2, ..., n) satisfy 0 ≤ ω_ι ≤ 1 and Σ_{ι=1}^{n} ω_ι = 1. Then the PLQROWA operator has the following properties:
(1) 
Idempotency: if p l s ι ( r ) ( ι = 1 , 2 , , n ) are equal, i.e., p l s ι ( r ) = p l s ( r ) { s θ ( u ) ( r ^ ( u ) ) } , { s ϕ ( v ) ( r ˜ ( v ) ) } , then
P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = p l s ( r ) = { s θ ( u ) ( r ^ ( u ) ) } , { s ϕ ( v ) ( r ˜ ( v ) ) } .
(2) 
Monotonicity: let pl_{s_1}(r), pl_{s_2}(r), ..., pl_{s_n}(r) and pl'_{s_1}(r), pl'_{s_2}(r), ..., pl'_{s_n}(r) be two collections of adjusted PLQRONs; if for all ι, s_{θ'_ι^{(u)}} < s_{θ_ι^{(u)}} and s_{φ'_ι^{(v)}} > s_{φ_ι^{(v)}}, then
PLQROWA(pl'_{s_1}(r), pl'_{s_2}(r), ..., pl'_{s_n}(r)) < PLQROWA(pl_{s_1}(r), pl_{s_2}(r), ..., pl_{s_n}(r)).
(3) 
Boundedness: let s θ ι ( + ) = max u = 1 U s θ ι ( u ) , s θ ι ( ) = min u = 1 U s θ ι ( u ) , s ϕ ι ( + ) = max v = 1 V s ϕ ι ( v ) and s ϕ ι ( ) = min v = 1 V s ϕ ι ( v ) , then
{ s θ ι ( ) ( r ^ ( u ) ) } , { s ϕ ι ( + ) ( r ˜ ( v ) ) } P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) { s θ ι ( + ) ( r ^ ( u ) ) } , { s ϕ ι ( ) ( r ˜ ( v ) ) } .
(1)
For all ι , since p l s ι ( r ) = p l s ( p ) = { s θ ( u ) ( r ^ ( u ) ) } , { s ϕ ( v ) ( r ˜ ( v ) ) } , then
P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = ω 1 p l s 1 ( r ) ω 2 p l s 2 ( r ) ω n p l s n ( r ) = { s ( ρ q ρ q ι = 1 n ( 1 ( θ ι ( u ) ) q ρ q ) ω ι ) 1 q ( r ^ ( u ) ) } , { s ι = 1 n ρ ( ϕ ι ( v ) ρ ) ω ι ( r ˜ ( v ) ) } = { s ( ρ q ρ q ( 1 ( θ ι ( u ) ) q ρ q ) ι = 1 n ω ι ) 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ ι ( v ) ρ ) ι = 1 n ω ι ( r ˜ ( v ) ) } = { s ( ρ q ρ q ( 1 ( θ ( u ) ) q ρ q ) ) 1 q ( r ^ ( u ) ) } , { s ρ ( ϕ ( v ) ρ ) ( r ˜ ( v ) ) } = { s θ ( u ) ( r ^ ( u ) ) } , { s ϕ ( v ) ( r ˜ ( v ) ) } .
Therefore P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = { s θ ( u ) ( r ^ ( u ) ) } , { s ϕ ( v ) ( r ˜ ( v ) ) } is proved.
(2)
For all ι , s θ ι ( u ) < s θ ι ( u ) and s ϕ ι ( v ) > s ϕ ι ( v ) , then we have
s 1 θ ι ( u ) ρ < s 1 θ ι ( u ) ρ s ( ρ ρ ι = 1 n ( 1 ( θ ι ( u ) ρ ) q ) ω ι ) 1 q < s ( ρ ρ ι = 1 n ( 1 ( θ ι ( u ) ρ ) q ) ω ι ) 1 q , s ι = 1 n ( ϕ ι ( v ) ) ω ι > s ι = 1 n ( ϕ ι ( v ) ) ω ι .
Assume P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = p l s ( r ) and P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = p l s ( r ) , by (1), we can get
A ( p l s ( r ) ) = u = 1 # U θ ( ( ρ ρ · ι = 1 n ( 1 ( θ ( u ) ρ ) q ) ω ι ) 1 q ρ · ( r ^ ( u ) ) ) q v = 1 # V ϕ ( ( ϕ ( v ) ) ω ι ρ · r ˜ ( v ) ) q ;
A ( p l s ( r ) ) = u = 1 # U θ ( ( ρ ρ · ι = 1 n ( 1 ( θ ( u ) ρ ) q ) ω ι ) 1 q ρ · ( r ^ ( u ) ) ) q v = 1 # V ϕ ( ( ϕ ( v ) ) ω ι ρ · r ˜ ( v ) ) q .
Then we have A ( p l s ( r ) ) < A ( p l s ( r ) ) , that is p l s ( r ) < p l s ( r ) .
Therefore P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) < P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) is proved.
(3)
For all ι , s θ ι ( ) s θ ι ( u ) s θ ι ( + ) , s ϕ ι ( ) s ϕ ι ( v ) s ϕ ι ( + ) , according to the properties (1) and (2), we can easily have
{ s θ ι ( ) ( r ^ ( u ) ) } , { s ϕ ι ( + ) ( r ˜ ( u ) ) } P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) { s θ ι ( + ) ( r ^ ( u ) ) } , { s ϕ ι ( ) ( r ˜ ( v ) ) } .
Remark 2.
Especially, when ω ι = 1 n ( ι = 1 , 2 , , n ) , the PLQROWA operator is reduced to a probabilistic linguistic q-rung orthopair averaging (PLQROA) operator:
P L Q R O A ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = 1 n p l s 1 ( r ) 1 n p l s 2 ( r ) 1 n p l s n ( r ) = { s ( ρ q ρ q ι = 1 n ( 1 ( θ ι ( u ) ) q ρ q ) 1 n ) 1 q ( r ^ ( u ) ) } , { s ι = 1 n ρ ( ϕ ι ( v ) ρ ) 1 n ( r ˜ ( v ) ) } .
Definition 10.
Let S ¯ [ 0 , ρ ] ( ρ > 2 g ) be a LTS, p l s ι ( r ) = { s θ ι ( u ) ( r ^ ( u ) ) } , { s ϕ ι ( v ) ( r ˜ ( v ) ) } ( ι = 1 , 2 , , n ; u = 1 , 2 , , U ; v = 1 , 2 , , V ) is a collection of adjusted PLQRONs, where s θ ι ( u ) , s ϕ ι ( v ) S ¯ [ 0 , ρ ] , the probabilistic linguistic q-rung orthopair weighted geometric (PLQROWG) operator is given as follows:
P L Q R O W G ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = ( p l s 1 ( r ) ) ω 1 ( p l s 2 ( r ) ) ω 2 ( p l s n ( r ) ) ω n = { s ι = 1 n ρ ( θ ι ( u ) ρ ) ω ι ( r ^ ( u ) ) } , { s ( ρ q ρ q ι = 1 n ( 1 ( ϕ ι ( v ) ) q ρ q ) ω ι ) 1 q ( r ˜ ( v ) ) } ,
where ω = (ω_1, ω_2, ..., ω_n)^T is the weight vector and satisfies Σ_{ι=1}^{n} ω_ι = 1 (0 ≤ ω_ι ≤ 1).
Theorem 4.
The PLQROWG operator satisfies the properties in Theorem 3.
Proof. 
Because the proof is similar to Theorem 3, we omit it here. □
Remark 3.
Especially, when ω ι = 1 n ( ι = 1 , 2 , , n ) , the PLQROWG operator is degenerated into the probabilistic linguistic q-rung orthopair geometric (PLQROG) operator
P L Q R O G ( p l s 1 ( r ) , p l s 2 ( r ) , , p l s n ( r ) ) = ( p l s 1 ( r ) ) 1 n ( p l s 2 ( r ) ) 1 n ( p l s n ( r ) ) 1 n = { s ι = 1 n ρ ( θ ι ( u ) ρ ) 1 n ( r ^ ( u ) ) } , { s ( ρ q ρ q ι = 1 n ( 1 ( ϕ ι ( v ) ) q ρ q ) 1 n ) 1 q ( r ˜ ( v ) ) } .
Example 4.
Let S ¯ = { s α | 0 α 6 } be a LTS, p l s 1 ( r ) = { s 2 ( 1 ) } , { s 1 ( 0.9 ) , s 2 ( 0.1 ) } , p l s 2 ( r ) = { s 1 ( 0.2 ) , s 2 ( 0.6 ) , s 3 ( 0.2 ) } , { s 1 ( 0.9 ) , s 2 ( 0.1 ) } and p l s 3 ( r ) = { s 2 ( 0.8 ) , s 3 ( 0.2 ) } , { s 0 ( 0.3 ) , s 1 ( 0.5 ) , s 2 ( 0.2 ) } be three PLQRONs, ω = ( 0.3 , 0.5 , 0.2 ) T is the corresponding weight vector, then the calculation results of the PLQROWA and the PLQROWG are given as follows.
Firstly, we adjust the corresponding probability distributions of p l s 1 ( r ) , p l s 2 ( r ) and p l s 3 ( r ) , the adjusted PLQRONs obtained as follows:
p l s 1 ( r ) = { s 2 ( 0.2 ) , s 2 ( 0.6 ) , s 2 ( 0.2 ) } , { s 1 ( 0.3 ) , s 1 ( 0.5 ) , s 1 ( 0.1 ) , s 2 ( 0.1 ) } ;
p l s 2 ( r ) = { s 1 ( 0.2 ) , s 2 ( 0.6 ) , s 3 ( 0.2 ) } , { s 1 ( 0.3 ) , s 1 ( 0.5 ) , s 1 ( 0.1 ) , s 2 ( 0.1 ) } ;
p l s 3 ( r ) = { s 2 ( 0.2 ) , s 2 ( 0.6 ) , s 3 ( 0.2 ) } , { s 0 ( 0.3 ) , s 1 ( 0.5 ) , s 2 ( 0.1 ) , s 2 ( 0.1 ) } .
If q = 3 , according to the Formula (3), we can get
P L Q R O W A ( p l s 1 ( r ) , p l s 2 ( r ) , p l s 3 ( r ) ) = { s ( 6 3 6 3 ι = 1 3 ( 1 ( θ ι ( u ) ) 3 6 3 ) ω ι ) 1 3 ( r ^ ( u ) ) } , { s ι = 1 3 6 ( ϕ ι ( v ) 6 ) ω ι ( r ˜ ( v ) ) } = { s ( 6 3 6 3 ( 1 2 3 6 3 ) 0.5 ( 1 1 3 6 3 ) 0.5 ) 1 3 ( 0.2 ) , s ( 6 3 6 3 ( 1 2 3 6 3 ) 1 ) 1 3 ( 0.6 ) , s ( 6 3 6 3 ( 1 2 3 6 3 ) 0.3 ( 1 3 3 6 3 ) 0.7 ) 1 3 ( 0.2 ) } , { s 6 ( 1 6 ) 0.8 ( 0 6 ) 0.2 ( 0.3 ) , s 6 ( 1 6 ) 1 ( 0.5 ) , s 6 ( 1 6 ) 0.8 ( 2 6 ) 0.2 ( 0.1 ) , s 6 ( 2 6 ) 1 ( 0.1 ) } = { s 1.65 ( 0.2 ) , s 2 ( 0.6 ) , s 2.98 ( 0.2 ) } , { s 0 ( 0.3 ) , s 1 ( 0.5 ) , s 1.15 ( 0.1 ) , s 2 ( 0.1 ) } .
If q = 3 , then according to Formula (4), we can obtain
P L Q R O W G ( p l s 1 ( r ) , p l s 2 ( r ) , p l s 3 ( r ) ) = { s ι = 1 3 6 ( θ ι ( u ) 6 ) ω ι ( r ^ ( u ) ) } , { s ( 6 3 6 3 ι = 1 3 ( 1 ( ϕ ι ( v ) ) 3 6 3 ) ω ι ) 1 3 ( r ˜ ( v ) ) } = { s 6 ( 1 6 ) 0.5 ( 2 6 ) 0.5 ( 0.2 ) , s 6 ( 2 6 ) 1 ( 0.6 ) , s 6 ( 2 6 ) 0.2 ( 3 6 ) 0.7 ( 0.2 ) } , { s ( 6 3 6 3 ( 1 1 3 6 3 ) 0.8 ( 1 ( 0 ) 3 6 3 ) 0.2 ) 1 3 ( 0.3 ) , s ( 6 3 6 3 ( 1 1 3 6 3 ) 1 ) 1 3 ( 0.5 ) , s ( 6 3 6 3 ( 1 1 3 6 3 ) 0.8 ( 1 2 3 6 3 ) 0.2 ) 1 3 ( 0.1 ) , s ( 6 3 6 3 ( 1 2 3 6 3 ) 1 ) 1 3 ( 0.1 ) } = { s 1.41 ( 0.2 ) , s 2 ( 0.6 ) , s 2.66 ( 0.2 ) } , { s 0 ( 0.3 ) , s 1 ( 0.5 ) , s 1.34 ( 0.1 ) , s 2 ( 0.1 ) } .
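The subscript-level computations in Example 4 can be reproduced with a short script. The sketch below is our own illustration: the helper names are invented, and it assumes the adjusted terms of the three PLQRONs are paired position by position with the raw expert weights (which is equivalent to merging equal exponents, as in the worked formulas above).

```python
def plqrowa_mem(thetas, weights, q=3, rho=6):
    # PLQROWA membership subscript: rho * (1 - prod_i (1 - (theta_i/rho)^q)^w_i)^(1/q)
    prod = 1.0
    for t, w in zip(thetas, weights):
        prod *= (1 - (t / rho) ** q) ** w
    return rho * (1 - prod) ** (1 / q)

def plqrowa_non(phis, weights, rho=6):
    # PLQROWA non-membership subscript: rho * prod_i (phi_i/rho)^w_i
    prod = 1.0
    for p, w in zip(phis, weights):
        prod *= (p / rho) ** w
    return rho * prod

def plqrowg_mem(thetas, weights, rho=6):
    # PLQROWG membership subscript: rho * prod_i (theta_i/rho)^w_i
    prod = 1.0
    for t, w in zip(thetas, weights):
        prod *= (t / rho) ** w
    return rho * prod

def plqrowg_non(phis, weights, q=3, rho=6):
    # PLQROWG non-membership subscript: rho * (1 - prod_i (1 - (phi_i/rho)^q)^w_i)^(1/q)
    prod = 1.0
    for p, w in zip(phis, weights):
        prod *= (1 - (p / rho) ** q) ** w
    return rho * (1 - prod) ** (1 / q)

w = (0.3, 0.5, 0.2)  # expert weights of Example 4
# first adjusted membership terms of pls1, pls2, pls3: s2, s1, s2
print(round(plqrowa_mem((2, 1, 2), w), 2))  # 1.65, matching s_1.65(0.2)
print(round(plqrowg_mem((2, 1, 2), w), 2))  # 1.41, matching s_1.41(0.2)
```

The other positions of the membership and non-membership parts are handled in the same way.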

3.4. Distance Measures between PLQRONs

In order to compare the differences between alternatives, we introduce distance measures between PLQRONs, which are an important tool for processing multi-attribute decision making problems.
Definition 11.
Let S ¯ [ 0 , ρ ] ( ρ > 2 g ) be a LTS, and let p l s 1 ( r ) = H s 1 ( r ^ ) , G s 1 ( r ˜ ) = { s θ 1 ( u ) ( r ^ ( u ) ) } , { s ϕ 1 ( v ) ( r ˜ ( v ) ) } and p l s 2 ( r ) = H s 2 ( r ^ ) , G s 2 ( r ˜ ) = { s θ 2 ( u ) ( r ^ ( u ) ) } , { s ϕ 2 ( v ) ( r ˜ ( v ) ) } ( u = 1 , 2 , ⋯ , U ; v = 1 , 2 , ⋯ , V ) be two adjusted PLQRONs, where s θ ι ( u ) , s ϕ ι ( v ) ∈ S ¯ [ 0 , ρ ] ( ι = 1 , 2 ) . The Hamming distance measure D d h d between p l s 1 ( r ) and p l s 2 ( r ) can be defined as:
D d h d ( p l s 1 ( r ) , p l s 2 ( r ) ) = ∑ u = 1 # U θ ( r ^ ( u ) · | ( θ 1 ( u ) ) q − ( θ 2 ( u ) ) q | / ρ q ) + ∑ v = 1 # V ϕ ( r ˜ ( v ) · | ( ϕ 1 ( v ) ) q − ( ϕ 2 ( v ) ) q | / ρ q ) ,
where q 1 , θ ι ( u ) and ϕ ι ( v ) ( ι = 1 , 2 ) are the subscripts of s θ ι ( u ) and s ϕ ι ( v ) ( ι = 1 , 2 ) , # U θ and # V ϕ represent the number of elements in H s ι ( r ^ ) and G s ι ( r ˜ ) ( ι = 1 , 2 ) , respectively.
The Euclidean distance measure D d e d between p l s 1 ( r ) and p l s 2 ( r ) can be defined as follows:
D d e d ( p l s 1 ( r ) , p l s 2 ( r ) ) = ( ∑ u = 1 # U θ ( r ^ ( u ) · | ( θ 1 ( u ) ) q − ( θ 2 ( u ) ) q | / ρ q ) 2 + ∑ v = 1 # V ϕ ( r ˜ ( v ) · | ( ϕ 1 ( v ) ) q − ( ϕ 2 ( v ) ) q | / ρ q ) 2 ) 1 / 2 ,
where the notation is the same as in the definition of D d h d .
The generalized distance measure D d g d between p l s 1 ( r ) and p l s 2 ( r ) can be defined as:
D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) = ( ∑ u = 1 # U θ ( r ^ ( u ) · | ( θ 1 ( u ) ) q − ( θ 2 ( u ) ) q | / ρ q ) λ + ∑ v = 1 # V ϕ ( r ˜ ( v ) · | ( ϕ 1 ( v ) ) q − ( ϕ 2 ( v ) ) q | / ρ q ) λ ) 1 / λ ,
where λ > 0 and the remaining notation is the same as in the definition of D d h d .
Remark 4.
In particular, if λ = 1 or λ = 2 , D d g d is degenerated into D d h d or D d e d , respectively.
Theorem 5.
Assume p l s 1 ( r ) , p l s 2 ( r ) and p l s 3 ( r ) are adjusted PLQRONs; the distance measure D d g d satisfies the following properties:
(1) 
Non-negativity: 0 ≤ D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) ≤ 1 , and D d g d ( p l s 1 ( r ) , p l s 1 ( r ) ) = 0 ;
(2) 
Symmetry: D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) = D d g d ( p l s 2 ( r ) , p l s 1 ( r ) ) ;
(3) 
Triangle inequality: D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) + D d g d ( p l s 2 ( r ) , p l s 3 ( r ) ) ≥ D d g d ( p l s 1 ( r ) , p l s 3 ( r ) ) .
Obviously, D d g d satisfies properties (1) and (2). In particular, the symmetry of the information can be expressed by the distance measure D d g d .
The proof of property (3) is given as follows:
D d g d ( p l s 1 ( r ) , p l s 3 ( r ) ) = u = 1 # U θ ( r ^ ( u ) · | ( θ 1 ( u ) ) q ( θ 3 ( u ) ) q | ρ q ) λ + v = 1 # V ϕ ( r ˜ ( v ) · | ( ϕ 1 ( v ) ) q ( ϕ 3 ( v ) ) q | ρ q ) λ λ
= u = 1 # U θ ( r ^ ( u ) · | ( θ 1 ( u ) ) q ( θ 2 ( u ) ) q + ( θ 2 ( u ) ) q ( θ 3 ( u ) ) q | ρ q ) λ + v = 1 # V ϕ ( r ˜ ( v ) · | ( ϕ 1 ( v ) ) q ( ϕ 2 ( v ) ) q + ( ϕ 2 ( v ) ) q ( ϕ 3 ( v ) ) q | ρ q ) λ λ u = 1 # U θ ( r ^ ( u ) · | ( θ 1 ( u ) ) q ( θ 2 ( u ) ) q | + | ( θ 2 ( u ) ) q ( θ 3 ( u ) ) q | ρ q ) λ + v = 1 # V ϕ ( r ˜ ( v ) · | ( ϕ 1 ( v ) ) q ( ϕ 2 ( v ) ) q | + | ( ϕ 2 ( v ) ) q ( ϕ 3 ( v ) ) q | ρ q ) λ λ u = 1 # U θ ( r ^ ( u ) · | ( θ 1 ( u ) ) q ( θ 2 ( u ) ) q | ρ q ) λ + v = 1 # L ϕ ( r ˜ ( v ) · | ( ϕ 1 ( v ) ) q ( ϕ 2 ( v ) ) q | ρ q ) λ λ + u = 1 # U θ ( r ^ ( u ) · | ( θ 2 ( u ) ) q ( θ 3 ( u ) ) q | ρ q ) λ + v = 1 # V ϕ ( r ˜ ( v ) · | ( ϕ 2 ( v ) ) q ( ϕ 3 ( v ) ) q | ρ q ) λ λ = D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) + D d g d ( p l s 2 ( r ) , p l s 3 ( r ) ) .
Therefore, D d g d ( p l s 1 ( r ) , p l s 3 ( r ) ) ≤ D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) + D d g d ( p l s 2 ( r ) , p l s 3 ( r ) ) is proved.
Example 5.
Assume S ¯ = { s α | 0 ≤ α ≤ 6 } is a LTS, and p l s 1 ( r ) = { s 2 ( 0.2 ) , s 2 ( 0.6 ) , s 2 ( 0.2 ) } , { s 3 ( 0.4 ) , s 4 ( 0.1 ) , s 4 ( 0.3 ) , s 5 ( 0.2 ) } and p l s 2 ( r ) = { s 1 ( 0.2 ) , s 2 ( 0.6 ) , s 3 ( 0.2 ) } , { s 5 ( 0.4 ) , s 5 ( 0.1 ) , s 6 ( 0.3 ) , s 6 ( 0.2 ) } are two adjusted PLQRONs. If q = 2 , the calculation of D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) is given as follows:
D d g d ( p l s 1 ( r ) , p l s 2 ( r ) ) = ( ( 0.2 · | 2 2 − 1 2 | / 6 2 ) λ + ( 0.6 · | 2 2 − 2 2 | / 6 2 ) λ + ( 0.2 · | 2 2 − 3 2 | / 6 2 ) λ + ( 0.4 · | 3 2 − 5 2 | / 6 2 ) λ + ( 0.1 · | 4 2 − 5 2 | / 6 2 ) λ + ( 0.3 · | 4 2 − 6 2 | / 6 2 ) λ + ( 0.2 · | 5 2 − 6 2 | / 6 2 ) λ ) 1 / λ .
If λ = 1 , we can get D d h d = 0.475 . If λ = 2 , we can get D d e d = 0.2545 .
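The numbers in Example 5 can be checked with a few lines of code. This is a minimal sketch (the function name and the flattened term lists are our own); it pairs the subscripts of the two adjusted PLQRONs term by term, with the membership and non-membership parts concatenated into single lists:

```python
def d_generalized(sub1, sub2, probs, q=2, rho=6, lam=1):
    # generalized distance D_dgd between two adjusted PLQRONs
    total = sum((p * abs(a ** q - b ** q) / rho ** q) ** lam
                for a, b, p in zip(sub1, sub2, probs))
    return total ** (1 / lam)

# membership then non-membership subscripts of pls1 and pls2 in Example 5,
# with their shared adjusted probabilities
sub1 = [2, 2, 2, 3, 4, 4, 5]
sub2 = [1, 2, 3, 5, 5, 6, 6]
probs = [0.2, 0.6, 0.2, 0.4, 0.1, 0.3, 0.2]
print(round(d_generalized(sub1, sub2, probs, lam=1), 4))  # 0.475  (Hamming)
print(round(d_generalized(sub1, sub2, probs, lam=2), 4))  # 0.2545 (Euclidean)
```

Both values agree with the results stated above.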

4. The Behavioral Decision Method

Since Hwang and Yoon [16] proposed the TOPSIS method, it has been widely applied in solving multiple criteria group decision making (MCGDM) problems. The traditional TOPSIS method [17,18] is effective in ranking alternatives; however, it does not take the behavioral factors of decision makers into account. Thus, Yoon and Kim [23] introduced a behavioral TOPSIS method, which incorporates the behavioral tendency of decision makers into the traditional TOPSIS method. In an uncertain decision making environment, however, representing the decision maker's behavioral factors is a difficult problem. We deal with it as follows: the gain can be viewed as what the decision maker earns by taking the alternative instead of the anti-ideal solution, and the loss as what the decision maker pays by taking the alternative instead of the ideal solution; both can be expressed by the distance measures of the related uncertain sets. The behavioral TOPSIS method thus captures the loss aversion of the decision maker studied in behavioral economics, and the decision maker can select an appropriate loss aversion ratio to express his/her preference. Because it precisely reflects the decision maker's behavioral tendency, the method has been shown to yield better choices than other methods (including the traditional TOPSIS method) in many fields, such as emergency decision making and the selection of oil pipeline routes.
Assume a group of experts e = { e 1 , e 2 , ⋯ , e W } evaluates a series of alternatives Q = { Q 1 , Q 2 , ⋯ , Q m } under the criteria C = { C 1 , C 2 , ⋯ , C n } , and let S ¯ [ 0 , ρ ] ( ρ > 2 g ) be a continuous LTS. The evaluations of the experts are represented in the form of PLQRONs p l s o ι ( r ) = H s o ι ( r ^ ) , G s o ι ( r ˜ ) , where H s o ι ( r ^ ) = { s θ o ι ( u ) ( r ^ ( u ) ) | s θ o ι ( u ) ∈ S ¯ [ 0 , ρ ] , r ^ ( u ) ≥ 0 , u = 1 , 2 , ⋯ , U ; ∑ u = 1 U r ^ ( u ) ≤ 1 } and G s o ι ( r ˜ ) = { s ϕ o ι ( v ) ( r ˜ ( v ) ) | s ϕ o ι ( v ) ∈ S ¯ [ 0 , ρ ] , r ˜ ( v ) ≥ 0 , v = 1 , 2 , ⋯ , V ; ∑ v = 1 V r ˜ ( v ) ≤ 1 } , o = 1 , 2 , ⋯ , m ; ι = 1 , 2 , ⋯ , n . The criteria's weight vector is ω c = ( ω c 1 , ω c 2 , ⋯ , ω c n ) T , where ∑ ι = 1 n ω c ι = 1 ( 0 ≤ ω c ι ≤ 1 ) , and the experts' weight vector is ω e = ( ω e 1 , ω e 2 , ⋯ , ω e W ) T ( ∑ w = 1 W ω e w = 1 , 0 ≤ ω e w ≤ 1 ) ; then the wth expert's decision matrix F ( w ) can be given as follows:
F ( w ) = [ P 11 w P 12 w ⋯ P 1 n w ; P 21 w P 22 w ⋯ P 2 n w ; ⋮ ; P m 1 w P m 2 w ⋯ P m n w ] ,
where P o ι w = ( P s o ι ( r ) ) w ( o = 1 , 2 , , m ; ι = 1 , 2 , , n ; w = 1 , 2 , , W ) are PLQRONs.
The steps of decision making are given as follows:
Step 1. Apply the adjustment method to adjust the probability distribution of PLQRONs, the adjusted decision matrix of the wth expert can be denoted as F ( w ) = ( P s o ι ( r ) ) m × n w .
Step 2. Apply the PLQROWA operator or the PLQROWG operator to obtain the aggregated decision matrix F ( ) = ( P s o ι ( r ) ) m × n . Furthermore, normalize the aggregated decision matrix F ( ) based on the type of each criterion: if it is a benefit-type criterion, there is no need to adjust; if it is a cost-type criterion, we need to utilize the negation operator to normalize the decision matrix.
Step 3. Determine the ideal solution Q + = { Q 1 + , Q 2 + , ⋯ , Q ι + } and the anti-ideal solution Q − = { Q 1 − , Q 2 − , ⋯ , Q ι − } , respectively, where
Q ι + = { max o = 1 m { p l s o ι ( r ) } } , Q ι − = { min o = 1 m { p l s o ι ( r ) } } .
For the criterion C ι , we apply Formulas (1) and (2) to calculate Q + and Q − .
Step 4. Utilize D d g d to calculate the distance between each alternative and Q + , Q − , respectively; that is, D o + and D o − , where D o + = ∑ ι = 1 n ω c ι D d g d ( Q o , Q + ) and D o − = ∑ ι = 1 n ω c ι D d g d ( Q o , Q − ) .
Step 5. Calculate the value function V o for alternative Q o ( o = 1 , 2 , , m ) .
V o = ( D o − ) α − γ ( D o + ) β , ( 0 ≤ α ≤ 1 , 0 ≤ β ≤ 1 )
where γ is the decision maker's loss aversion ratio: if γ > 1 , the decision maker's behavior is more sensitive to losses than to gains; if γ = 1 , the decision maker has a neutral attitude towards losses and gains; if γ < 1 , the decision maker's behavior is more sensitive to gains than to losses. The parameters α and β reflect the decision maker's risk aversion attitude and risk seeking attitude in the decision process, respectively.
Step 6. The greater the value of V o , the better the alternative Q o ; we can then obtain the ranking of the alternatives.
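Steps 4–6 can be sketched as follows. The separation measures below are illustrative placeholders (not values from any table in this paper); only the form of the value function V o comes from the method itself.

```python
def value_function(d_minus, d_plus, alpha=0.88, beta=0.88, gamma=2.25):
    # behavioral value: gain term (D_o^-)^alpha minus weighted loss term gamma*(D_o^+)^beta
    return d_minus ** alpha - gamma * d_plus ** beta

# hypothetical separation measures (D_o^-, D_o^+) for three alternatives
separations = {"Q1": (0.30, 0.25), "Q2": (0.45, 0.10), "Q3": (0.15, 0.40)}
values = {q: value_function(dm, dp) for q, (dm, dp) in separations.items()}
ranking = sorted(values, key=values.get, reverse=True)  # greater V_o is better
print(ranking)  # ['Q2', 'Q1', 'Q3']
```

With these placeholder distances, the alternative that is far from the anti-ideal solution and close to the ideal solution wins, as intended.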

5. Numerical Example

Here we present a practical multiple criteria group decision making example about investment decision (Beg and Rashid [27]), and the behavioral TOPSIS method is utilized to deal with this problem. The advantages of the behavioral TOPSIS method with PLQROSs are highlighted by a comparison analysis with the traditional TOPSIS method. Furthermore, we analyze the stability and sensitivity of the decision makers' behavior parameters.

5.1. Background

There are three investors e 1 , e 2 and e 3 , who want to invest in the following three types of projects: real estate ( Q 1 ), the stock market ( Q 2 ) and treasury bills ( Q 3 ). In order to decide which project to invest in, they consider the following attributes: the risk factor ( C 1 ), the growth factor ( C 2 ), the return rate ( C 3 ) and the complexity of the document requirements ( C 4 ). The weight vector of the investors is ( 0.3 , 0.5 , 0.2 ) T and the criteria's weight vector is ( 0.4 , 0.2 , 0.3 , 0.1 ) T . The evaluation scale for criterion C 1 is the LTS S 1 = { s 0 : extremely low , s 1 : low , s 2 : slightly low , s 3 : generally , s 4 : slightly high , s 5 : high , s 6 : extremely high } ; for the criteria C ι ( ι = 2 , 3 ) it is S ι = { s 0 : extremely slow , s 1 : slow , s 2 : slightly slow , s 3 : generally , s 4 : slightly fast , s 5 : fast , s 6 : extremely fast } ; and for criterion C 4 it is the LTS S 4 = { s 0 : extremely easy , s 1 : easy , s 2 : slightly easy , s 3 : generally , s 4 : slightly complex , s 5 : complex , s 6 : extremely complex } . The decision matrices of the experts are expressed in Table 1, Table 2 and Table 3.
Here, F ( w ) represents the wth investor's evaluation information.

5.2. The Behavioral TOPSIS Method

Step 1. According to the adjustment method, we adjust the probability distribution of decision matrices F ( 1 ) , F ( 2 ) and F ( 3 ) , and the corresponding adjusted matrices F ( 1 ) , F ( 2 ) and F ( 3 ) are given in Table 4, Table 5 and Table 6.
Step 2. Firstly, we aggregate the adjusted decision matrices based on the PLQROWA operator. Then we normalize the aggregated matrix according to the type of each criterion (criteria C 2 and C 3 are benefit-type criteria, while criteria C 1 and C 4 are cost-type criteria). If q = 2 , we can get the normalized decision matrix F ( ) in Table 7.
Step 3. According to Definition 8, the score function matrix can be obtained as follows:
A = [ 0.38 0.219 0.1478 0.2035 ; 0.0939 0.1448 0.263 0.3269 ; 0.2973 0.0841 0.0334 0.38 ] .
Furthermore, we can obtain the ideal solution as follows:
Q + = { { s 0 ( 0.5 ) , s 1.25 ( 0.1 ) , s 1.53 ( 0.2 ) , s 1.62 ( 0.2 ) } , { s 3.16 ( 0.3 ) , s 3.69 ( 0.2 ) , s 3.75 ( 0.2 ) , s 4.2 ( 0.3 ) } , { s 4.27 ( 0.4 ) , s 4.6 ( 0.3 ) , s 6 ( 0.3 ) } , { s 0 ( 0.5 ) , s 0 ( 0.2 ) , s 1.41 ( 0.3 ) } , { s 3.76 ( 0.3 ) , s 4.6 ( 0.1 ) , s 4.78 ( 0.6 ) } , { s 0 ( 0.7 ) , s 1.62 ( 0.3 ) } , { s 0 ( 0.5 ) , s 0 ( 0.5 ) } , { s 4.03 ( 0.3 ) , s 4.05 ( 0.4 ) , s 6 ( 0.3 ) } } ,
The anti-ideal solution is given as follows:
Q − = { { s 0 ( 0.4 ) , s 0 ( 0.6 ) } , { s 6 ( 0.3 ) , s 6 ( 0.2 ) , s 6 ( 0.5 ) } , { s 2.92 ( 0.5 ) , s 3.6 ( 0.1 ) , s 3.95 ( 0.4 ) } , { s 0 ( 0.1 ) , s 0 ( 0.3 ) , s 0 ( 0.1 ) , s 2.63 ( 0.5 ) } , { s 0.9 ( 0.2 ) , s 1 ( 0.3 ) , s 1.85 ( 0.1 ) , s 2 ( 0.4 ) } , { s 2.26 ( 0.2 ) , s 2.77 ( 0.2 ) , s 3 ( 0.3 ) , s 3.27 ( 0.1 ) , s 3.78 ( 0.2 ) } , { s 0 ( 0.5 ) , s 0 ( 0.5 ) } , { s 6 ( 0.3 ) , s 6 ( 0.2 ) , s 6 ( 0.5 ) } } .
Step 4. Calculate D ( Q o , Q + ) and D ( Q o , Q − ) ( o = 1 , 2 , 3 ) , respectively.
If q = 2 , we apply the Euclidean distance measure D d e d ; then D o + = ∑ ι = 1 4 ω c ι D d e d ( Q o , Q + ) and D o − = ∑ ι = 1 4 ω c ι D d e d ( Q o , Q − ) . The separation measures between each alternative and the ideal/anti-ideal solutions are obtained in Table 8.
Step 5. Calculate the value function V o for the alternatives Q o ( o = 1 , 2 , 3 ) . The parameters α , β and γ are used to describe the decision maker's behavioral tendency; here we assume γ = 2.25 and α = β = 0.88 [28], and then we have V 1 = 0.2801 , V 2 = 0.1334 , V 3 = 0.4797 .
Step 6. According to the values of V o , we have Q 2 ≻ Q 1 ≻ Q 3 , so Q 2 is the best alternative.
Next, we consider the relationship between the decision conclusion and the change of the parameter λ. We still take the PLQROWA operator as an example. Assume q = 2 , α = β = 0.88 , γ = 2.25 , and λ = 2 , 3 , 5 , 8 , 10 , 12 , respectively. Figure 3 shows the corresponding ranking results (Table 9 shows the detailed calculation results). Obviously, the value function V o is not sensitive to variations of the parameter λ, which indicates that the parameter λ has little effect on the decision results.

5.3. Comparison Analysis with the Existing Method

Here, the traditional TOPSIS method is used for comparison with the behavioral TOPSIS method; the algorithm steps [29] are given as follows.
Step 1. Adjust the probability distribution of PLQRONs, the corresponding matrices F ( 1 ) , F ( 2 ) and F ( 3 ) are obtained.
Step 2. Apply the PLQROWA operator to aggregate the evaluation information, and then normalize the aggregated decision matrix; the result is the same as in Section 5.2.
Step 3. Similarly, we can obtain the positive ideal solution Q + as follows:
Q + = { { s 0 ( 0.5 ) , s 1.25 ( 0.1 ) , s 1.53 ( 0.2 ) , s 1.62 ( 0.2 ) } , { s 3.16 ( 0.3 ) , s 3.69 ( 0.2 ) , s 3.75 ( 0.2 ) , s 4.2 ( 0.3 ) } , { s 4.27 ( 0.4 ) , s 4.6 ( 0.3 ) , s 6 ( 0.3 ) } , { s 0 ( 0.5 ) , s 0 ( 0.2 ) , s 1.41 ( 0.3 ) } , { s 3.76 ( 0.3 ) , s 4.6 ( 0.1 ) , s 4.78 ( 0.6 ) } , { s 0 ( 0.7 ) , s 1.62 ( 0.3 ) } , { s 0 ( 0.5 ) , s 0 ( 0.5 ) } , { s 4.03 ( 0.3 ) , s 4.05 ( 0.4 ) , s 6 ( 0.3 ) } } .
The anti-ideal solution Q is also obtained as follows:
Q − = { { s 0 ( 0.4 ) , s 0 ( 0.6 ) } , { s 6 ( 0.3 ) , s 6 ( 0.2 ) , s 6 ( 0.5 ) } , { s 2.92 ( 0.5 ) , s 3.6 ( 0.1 ) , s 3.95 ( 0.4 ) } , { s 0 ( 0.1 ) , s 0 ( 0.3 ) , s 0 ( 0.1 ) , s 2.63 ( 0.5 ) } , { s 0.9 ( 0.2 ) , s 1 ( 0.3 ) , s 1.85 ( 0.1 ) , s 2 ( 0.4 ) } , { s 2.26 ( 0.2 ) , s 2.77 ( 0.2 ) , s 3 ( 0.3 ) , s 3.27 ( 0.1 ) , s 3.78 ( 0.2 ) } , { s 0 ( 0.5 ) , s 0 ( 0.5 ) } , { s 6 ( 0.3 ) , s 6 ( 0.2 ) , s 6 ( 0.5 ) } } .
Step 4. If q = 2 , we apply D d e d to calculate the distances of each alternative from Q + and Q − ; the results are given in Table 10.
Step 5. Calculate the closeness coefficient R o ( o = 1 , 2 , 3 ) ,
R o = D o − / ( D o − + D o + ) .
By calculation, we get R 1 = 0.4419 , R 2 = 0.8271 and R 3 = 0.2145 . So the ranking order of the alternatives is Q 2 ≻ Q 1 ≻ Q 3 . The decision result is the same as that of the behavioral TOPSIS method, which shows that the proposed method is effective.
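The closeness coefficient of Step 5 can be sketched as below (a minimal illustration; the separation values are placeholders, not those of Table 10):

```python
def closeness(d_minus, d_plus):
    # traditional TOPSIS relative closeness R_o = D_o^- / (D_o^- + D_o^+)
    return d_minus / (d_minus + d_plus)

# hypothetical separation measures for one alternative
print(round(closeness(0.45, 0.10), 4))  # 0.8182
```

Ranking by R o in descending order reproduces the traditional TOPSIS ordering.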
Similarly, we consider the relationship between the decision result and the change of λ based on the traditional TOPSIS method. Here q = 2 , α = β = 0.88 and γ = 2.25 , and the parameter λ = 2 , 3 , 5 , 8 , 10 , 12 ; we apply the PLQROWA operator to calculate the closeness coefficient R o of each alternative. Figure 4 shows the ranking results (Table 11 shows the detailed calculation results). As can be seen from Figure 4, the closeness coefficient R o remains unchanged and the decision result also tends to be stable.

5.4. The Sensitivity of Decision Maker’s Behavior

Here, we analyze the influence of the loss aversion parameter γ and the risk preference parameters α and β in the proposed behavioral TOPSIS method.
Firstly, the impact of the loss aversion parameter γ in the value function is considered. We take the PLQROWA operator as an example: if q = 2 , α = β = 0.88 and λ = 2 , let γ = 0.5 , 0.8 , 1 , 2.25 , 5 ; the ranking results of the value function V o are shown in Figure 5 (Table 12 shows the detailed calculation results). As can be seen from Figure 5, when γ ≤ 2.25 , the values of V 1 , V 2 and V 3 are less sensitive to the change of the loss aversion parameter γ; when γ > 2.25 , the values of V o ( o = 1 , 2 , 3 ) change markedly. In comparison, the loss aversion parameter γ has a significant influence on V 2 and V 3 . When γ > 2.25 , the values of V o ( o = 1 , 2 , 3 ) decrease sharply at the same time, which means that as the parameter γ becomes larger, loss aversion has a greater impact on the value function V o .
Next, we consider the influence of the risk preference parameters α and β in the value function, respectively. We take the PLQROWA operator as an example: suppose that q = 2 , β = 0.88 , γ = 2.25 and λ = 2 , and let α = 0.1 , 0.3 , 0.5 , 0.8 , 1 ; the results of the value functions changing with the parameter α are shown in Figure 6. It is easy to see that the values of the function V o ( o = 1 , 2 , 3 ) decrease as α increases. From Figure 6, we know that Q 2 is always the best alternative. If α > 0.88 , the values of V o ( o = 1 , 2 , 3 ) also tend to be stable.
Furthermore, assume that q = 2 , α = 0.88 , γ = 2.25 and λ = 2 , and let β = 0.1 , 0.3 , 0.5 , 0.8 , 1 ; the results of the value functions changing with the parameter β are shown in Figure 7. Similarly, we know that the values of V o ( o = 1 , 2 , 3 ) increase with the parameter β, and the best alternative remains unchanged. If β > 0.88 , the values of V o ( o = 1 , 2 , 3 ) tend to be stable. In conclusion, the change of the value function V o is consistent with the expert's risk preference: if the expert is risk averse, as the parameter α increases he/she is more sensitive to losses, and the overall value functions decrease; if the decision maker has a risk appetite, as the parameter β increases he/she becomes more sensitive to gains, and the overall value functions increase.
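The γ-sweep above can be sketched as a simple loop. The separation measures here are illustrative placeholders; only the functional form of V o follows the value function of Section 4, and the sweep confirms that V decreases monotonically as the loss aversion ratio γ grows.

```python
def value_function(d_minus, d_plus, alpha=0.88, beta=0.88, gamma=2.25):
    # behavioral value: (D_o^-)^alpha - gamma * (D_o^+)^beta
    return d_minus ** alpha - gamma * d_plus ** beta

d_minus, d_plus = 0.45, 0.10  # hypothetical separation measures
gammas = [0.5, 0.8, 1, 2.25, 5]
sweep = [round(value_function(d_minus, d_plus, gamma=g), 4) for g in gammas]
# V_o shrinks as gamma grows: a more loss-averse decision maker penalizes D_o^+ harder
assert all(a > b for a, b in zip(sweep, sweep[1:]))
```

The same loop over α or β (with the other parameters fixed) reproduces the monotone trends described above.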
According to the above comparison analyses, we can find that the proposed method has the following advantages. First, the behavioral TOPSIS method reflects the decision maker's choice behavior by adopting gains and losses. Second, it has been demonstrated that the traditional TOPSIS method is a special case of the proposed behavioral TOPSIS method [23], while the behavioral TOPSIS method covers a wider range of situations. In addition, there are three parameters (α, β and γ) in the value function V o , and the decision maker can choose appropriate numerical values according to his/her risk preference and loss aversion, which makes the proposed behavioral TOPSIS method more flexible in practical applications.

6. Conclusions

The main conclusions of the paper are given as follows:
(1)
The operational laws of PLQROSs are proposed based on the adjusted PLQROSs with the same probability. Then we present the PLQROWA operator, the PLQROWG operator and the distance measures between PLQROSs based on the proposed operational laws.
(2)
We develop the fuzzy behavioral TOPSIS method for PLQROSs, which considers the decision maker's behavioral tendency in the decision making process.
(3)
We utilize a numerical example to demonstrate the validity and feasibility of the fuzzy behavioral TOPSIS method, and we demonstrate its merits by comparison with the traditional TOPSIS method.
Next, we will apply the proposed method to other multi-attribute decision making problems, such as emergency decision making, supplier selection and investment decision.

Author Contributions

Conceptualization, D.L. and A.H.; methodology, D.L. and Y.L.; writing, D.L. and A.H.; supervision, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the National Natural Science Foundation of China (11501191) and the Hunan Postgraduate Scientific Research and Innovation Project (CX20190812).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

We thank the editor and anonymous reviewers for their helpful comments on an earlier draft of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  2. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—II. Inf. Sci. 1975, 8, 301–357. [Google Scholar] [CrossRef]
  3. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—III. Inf. Sci. 1975, 9, 43–80. [Google Scholar] [CrossRef]
  4. Rodriguez, R.M.; Luis, M.; Francisco, H. Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119. [Google Scholar] [CrossRef]
  5. Beg, I.; Rashid, T. TOPSIS for hesitant fuzzy linguistic term sets. Int. J. Intell. Syst. 2013, 28, 1162–1171. [Google Scholar] [CrossRef]
  6. Liao, H.; Xu, Z.; Herrera-Viedma, E.; Herrera, F. Hesitant fuzzy linguistic term set and its application in decision making: A state of the art survey. Int. J. Fuzzy Syst. 2018, 20, 2084–2110. [Google Scholar] [CrossRef]
  7. Liu, D.H.; Liu, Y.Y.; Chen, X.H. The new similarity measure and distance measure of a hesitant fuzzy linguistic term set based on a linguistic scale function. Symmetry 2018, 10, 367. [Google Scholar] [CrossRef] [Green Version]
  8. Hai, W.; Xu, Z.; Zeng, X.J. Hesitant fuzzy linguistic term sets for linguistic decision making: Current developments, issues and challenges. Inf. Fusion 2018, 43, 1–12. [Google Scholar]
  9. Wu, Z.; Xu, J.; Jiang, X.; Zhong, L. Two MAGDM models based on hesitant fuzzy linguistic term sets with possibility distributions: VIKOR and TOPSIS. Inf. Sci. 2019, 473, 101–120. [Google Scholar] [CrossRef]
  10. Kong, M.; Pei, Z.; Ren, F.; Hao, F. New operations on generalized hesitant fuzzy linguistic term Sets for linguistic decision making. Int. J. Fuzzy Syst. 2019, 21, 243–262. [Google Scholar] [CrossRef]
  11. Liu, D.H.; Chen, X.H.; Peng, D. Distance measures for hesitant fuzzy linguistic sets and their applications in multiple criteria decision making. Int. J. Fuzzy Syst. 2018, 20, 2111–2121. [Google Scholar] [CrossRef]
  12. Pang, Q.; Wang, H.; Xu, Z.S. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143. [Google Scholar] [CrossRef]
  13. Chen, Z.C. An approach to multiple attribute group decision making based on linguistic intuitionistic fuzzy numbers. Int. J. Comput. Intell. Syst. 2015, 8, 747–760. [Google Scholar] [CrossRef] [Green Version]
  14. Garg, H. Linguistic Pythagorean fuzzy sets and its applications in multiattribute decision-making process. Int. J. Intell. Syst. 2018, 33, 1234–1263. [Google Scholar] [CrossRef]
  15. Yager, R.R. Generalized orthopair fuzzy sets. IEEE Trans. Fuzzy Syst. 2016, 25, 1222–1230. [Google Scholar] [CrossRef]
  16. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications a State-of-the-Art Survey; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  17. Lai, Y.J.; Liu, T.Y.; Hwang, C.L. TOPSIS for MODM. Eur. J. Oper. Res. 1994, 76, 486–500. [Google Scholar] [CrossRef]
  18. Chen, C.T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst. 2000, 114, 1–9. [Google Scholar] [CrossRef]
  19. Zyoud, S.H.; Fuchs-Hanusch, D. A bibliometric-based survey on AHP and TOPSIS techniques. Expert Syst. Appl. 2017, 78, 158–181. [Google Scholar] [CrossRef]
  20. Liu, D.H.; Chen, X.H.; Peng, D. Cosine distance measure between neutrosophic hesitant fuzzy linguistic sets and its application in multiple criteria decision making. Symmetry 2018, 10, 602. [Google Scholar] [CrossRef] [Green Version]
  21. Liu, D.H.; Chen, X.H.; Peng, D. Some cosine similarity measures and distance measures between q-rung orthopair fuzzy sets. Int. J. Intell. Syst. 2019, 34, 1572–1587. [Google Scholar] [CrossRef]
  22. Liu, H.C.; Wang, L.E.; Li, Z.; Hu, Y.P. Improving risk evaluation in FMEA with cloud model and hierarchical TOPSIS method. IEEE Trans. Fuzzy Syst. 2019, 27, 84–95. [Google Scholar] [CrossRef]
  23. Yoon, K.P.; Kim, W.K. The behavioral TOPSIS. Expert Syst. Appl. 2017, 89, 266–272. [Google Scholar] [CrossRef]
  24. Xu, Z.S. A method based on linguistic aggregation operators for group decision making with linguistic preference relations. Inf. Sci. 2004, 166, 19–30. [Google Scholar] [CrossRef]
  25. Liu, P.D.; Liu, W.Q. Multiple-attribute group decision-making based on power Bonferroni operators of linguistic q-rung orthopair fuzzy numbers. Int. J. Intell. Syst. 2019, 34, 652–689. [Google Scholar] [CrossRef]
  26. Wu, X.; Liao, H.; Xu, Z.; Hafezalkotob, A.; Herrera, F. Probabilistic linguistic MULTIMOORA: A multi-attributes decision making method based on the probabilistic linguistic expectation function and the improved Borda rule. IEEE Trans. Fuzzy Syst. 2018, 26, 3688–3702. [Google Scholar] [CrossRef]
  27. Beg, I.; Rashid, T. Hesitant intuitionistic fuzzy linguistic term sets. Notes Intuit. Fuzzy Sets 2014, 20, 53–64. [Google Scholar]
  28. Tversky, A.; Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty. J. Risk Uncertain. 1992, 5, 297–323. [Google Scholar] [CrossRef]
  29. Herrera, F.; Herrera-Viedma, E. Linguistic decision analysis: Steps for solving decision problems under linguistic information. Fuzzy Sets Syst. 2000, 115, 67–82. [Google Scholar] [CrossRef]
Figure 1. The adjustment process of PLQRONs p l s 1 ( r ) and p l s 2 ( r ) .
Figure 2. The adjustment process of PLQRONs p l s 1 ( r ) and p l s 2 ( r ) .
Figure 3. The results of changing the parameter λ in the behavioral TOPSIS method.
Figure 4. The results of changing the parameter λ in the traditional TOPSIS method.
Figure 5. The results of changing the loss aversion parameter γ.
Figure 6. The results of changing the risk preference parameter α.
Figure 7. The results of changing the risk preference parameter β.
Table 1. The decision matrix F ( 1 ) .
C 1 C 2
Q 1 { s 6 ( 1 ) } , { s 0 ( 1 ) } { s 4 ( 0.4 ) , s 5 ( 0.6 ) } , { s 0 ( 0.7 ) , s 1 ( 0.3 ) }
Q 2 { s 4 ( 0.7 ) , s 5 ( 0.3 ) } , { s 1 ( 0.6 ) , s 2 ( 0.4 ) } { s 3 ( 0.3 ) , s 4 ( 0.7 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
Q 3 { s 5 ( 0.6 ) , s 6 ( 0.4 ) } , { s 0 ( 0.5 ) , s 1 ( 0.5 ) } { s 1 ( 0.5 ) , s 2 ( 0.5 ) } , { s 3 ( 0.5 ) , s 4 ( 0.5 ) }
C 3 C 4
{ s 0 ( 0.3 ) , s 1 ( 0.7 ) } , { s 2 ( 0.4 ) , s 3 ( 0.6 ) } { s 1 ( 0.7 ) , s 2 ( 0.3 ) } , { s 3 ( 0.5 ) , s 4 ( 0.5 ) }
{ s 3 ( 0.3 ) , s 4 ( 0.7 ) } , { s 0 ( 0.7 ) , s 1 ( 0.3 ) } { s 4 ( 0.3 ) , s 5 ( 0.7 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
{ s 1 ( 0.5 ) , s 2 ( 0.5 ) } , { s 3 ( 0.7 ) , s 4 ( 0.3 ) } { s 4 ( 0.3 ) , s 5 ( 0.7 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
Table 2. The decision matrix F ( 2 ) .
C 1 C 2
Q 1 { s 0 ( 0.3 ) , s 1 ( 0.7 ) } , { s 2 ( 0.4 ) , s 3 ( 0.6 ) } { s 4 ( 0.7 ) , s 5 ( 0.3 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
Q 2 { s 3 ( 0.3 ) , s 4 ( 0.7 ) } , { s 0 ( 0.5 ) , s 1 ( 0.5 ) } { s 1 ( 0.5 ) , s 2 ( 0.5 ) } , { s 3 ( 0.5 ) , s 4 ( 0.5 ) }
Q 3 { s 5 ( 0.6 ) , s 6 ( 0.4 ) } , { s 0 ( 1 ) } { s 3 ( 0.5 ) , s 4 ( 0.5 ) } , { s 1 ( 0.1 ) , s 2 ( 0.3 ) , s 3 ( 0.6 ) }
C 3 C 4
{ s 4 ( 0.5 ) , s 5 ( 0.5 ) } , { s 0 ( 0.7 ) , s 1 ( 0.3 ) } { s 5 ( 0.7 ) , s 6 ( 0.3 ) } , { s 0 ( 1 ) }
{ s 4 ( 0.3 ) , s 5 ( 0.7 ) } , { s 1 ( 0.7 ) , s 2 ( 0.3 ) } { s 5 ( 0.5 ) , s 6 ( 0.5 ) } , { s 0 ( 1 ) }
{ s 1 ( 0.5 ) , s 2 ( 0.5 ) } , { s 2 ( 0.2 ) , s 3 ( 0.6 ) , s 4 ( 0.2 ) } { s 4 ( 0.5 ) , s 5 ( 0.5 ) } , { s 0 ( 1 ) }
Table 3. The decision matrix F ( 3 ) .
C 1 C 2
Q 1 { s 4 ( 0.5 ) , s 5 ( 0.5 ) } , { s 0 ( 0.4 ) , s 1 ( 0.6 ) } { s 5 ( 0.7 ) , s 6 ( 0.3 ) } , { s 0 ( 1 ) }
Q 2 { s 1 ( 0.5 ) , s 2 ( 0.5 ) } , { s 2 ( 0.5 ) , s 3 ( 0.3 ) , s 4 ( 0.2 ) } { s 5 ( 0.7 ) , s 6 ( 0.3 ) } , { s 0 ( 1 ) }
Q 3 { s 4 ( 0.2 ) , s 5 ( 0.8 ) } , { s 1 ( 0.7 ) , s 2 ( 0.3 ) } { s 4 ( 0.6 ) , s 5 ( 0.4 ) } , { s 0 ( 0.5 ) , s 1 ( 0.5 ) }
C 3 C 4
{ s 2 ( 0.5 ) , s 3 ( 0.5 ) , } , { s 3 ( 0.5 ) , s 4 ( 0.5 ) } { s 0 ( 0.3 ) , s 1 ( 0.7 ) } , { s 3 ( 0.5 ) , s 4 ( 0.5 ) }
{ s 4 ( 0.4 ) , s 5 ( 0.6 ) } , { s 1 ( 0.7 ) , s 2 ( 0.3 ) } { s 3 ( 0.3 ) , s 4 ( 0.7 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
{ s 0 ( 0.2 ) , s 1 ( 0.4 ) , s 2 ( 0.4 ) } , { s 2 ( 0.4 ) , s 3 ( 0.6 ) } { s 6 ( 1 ) , } , { s 0 ( 1 ) }
Table 4. The adjusted decision matrix F ( 1 ) .
C 1 C 2
Q 1 { s 6 ( 0.3 ) , s 6 ( 0.2 ) , s 6 ( 0.5 ) } , { s 0 ( 0.4 ) , s 0 ( 0.6 ) } { s 4 ( 0.4 ) , s 5 ( 0.3 ) , s 5 ( 0.3 ) } , { s 0 ( 0.5 ) , s 0 ( 0.2 ) , s 1 ( 0.3 ) }
Q 2 { s 4 ( 0.3 ) , s 4 ( 0.2 ) , s 4 ( 0.2 ) , s 5 ( 0.3 ) } , { s 1 ( 0.5 ) , s 1 ( 0.1 ) , s 2 ( 0.2 ) , s 2 ( 0.2 ) } { s 3 ( 0.3 ) , s 4 ( 0.2 ) , s 4 ( 0.2 ) , s 4 ( 0.3 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
Q 3 { s 5 ( 0.2 ) , s 5 ( 0.4 ) , s 6 ( 0.4 ) } , { s 0 ( 0.5 ) , s 1 ( 0.2 ) , s 1 ( 0.3 ) } { s 1 ( 0.5 ) , s 2 ( 0.1 ) , s 2 ( 0.4 ) } , { s 3 ( 0.1 ) , s 3 ( 0.3 ) , s 3 ( 0.1 ) , s 4 ( 0.5 ) }
C 3 C 4
{ s 0 ( 0.3 ) , s 1 ( 0.2 ) , s 1 ( 0.5 ) } , { s 2 ( 0.4 ) , s 3 ( 0.1 ) , s 3 ( 0.2 ) , s 3 ( 0.3 ) } { s 1 ( 0.3 ) , s 1 ( 0.4 ) , s 2 ( 0.3 ) } , { s 3 ( 0.5 ) , s 4 ( 0.5 ) }
{ s 3 ( 0.3 ) , s 4 ( 0.1 ) , s 4 ( 0.6 ) } , { s 0 ( 0.7 ) , s 0 ( 0.3 ) } { s 4 ( 0.3 ) , s 5 ( 0.2 ) , s 5 ( 0.5 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
{ s 1 ( 0.2 ) , s 1 ( 0.3 ) , s 2 ( 0.1 ) , s 2 ( 0.4 ) } , { s 3 ( 0.2 ) , s 3 ( 0.2 ) , s 3 ( 0.3 ) , s 4 ( 0.1 ) , s 4 ( 0.2 ) } { s 4 ( 0.3 ) , s 5 ( 0.2 ) , s 5 ( 0.5 ) } , { s 1 ( 0.5 ) , s 2 ( 0.5 ) }
Table 5. The adjusted decision matrix F(2).
     C1 | C2
Q1 | {s0(0.3), s1(0.2), s1(0.5)}, {s2(0.4), s3(0.6)} | {s4(0.4), s4(0.3), s5(0.3)}, {s1(0.5), s2(0.2), s2(0.3)}
Q2 | {s3(0.3), s4(0.2), s4(0.2), s4(0.3)}, {s0(0.5), s1(0.1), s1(0.2), s1(0.2)} | {s1(0.3), s1(0.2), s2(0.2), s2(0.3)}, {s3(0.5), s4(0.5)}
Q3 | {s5(0.2), s5(0.4), s6(0.4)}, {s0(0.5), s0(0.2), s0(0.3)} | {s3(0.5), s4(0.1), s4(0.4)}, {s1(0.1), s2(0.3), s3(0.1), s3(0.5)}
     C3 | C4
Q1 | {s4(0.3), s4(0.2), s5(0.5)}, {s0(0.4), s0(0.1), s0(0.2), s1(0.3)} | {s5(0.3), s5(0.4), s6(0.3)}, {s0(0.5), s0(0.5)}
Q2 | {s4(0.3), s5(0.1), s5(0.6)}, {s1(0.7), s2(0.3)} | {s5(0.3), s5(0.2), s6(0.5)}, {s0(0.5), s0(0.5)}
Q3 | {s1(0.2), s1(0.3), s2(0.1), s2(0.4)}, {s2(0.2), s3(0.2), s3(0.3), s3(0.1), s4(0.2)} | {s4(0.3), s4(0.2), s5(0.5)}, {s0(0.5), s0(0.5)}
Table 6. The adjusted decision matrix F(3).
     C1 | C2
Q1 | {s4(0.3), s4(0.2), s5(0.5)}, {s0(0.4), s1(0.6)} | {s5(0.4), s5(0.3), s6(0.3)}, {s0(0.5), s0(0.2), s0(0.3)}
Q2 | {s1(0.3), s1(0.2), s2(0.2), s2(0.3)}, {s2(0.5), s3(0.1), s3(0.2), s4(0.2)} | {s5(0.3), s5(0.2), s5(0.2), s6(0.3)}, {s0(0.5), s0(0.5)}
Q3 | {s4(0.2), s5(0.4), s5(0.4)}, {s1(0.5), s1(0.2), s2(0.3)} | {s4(0.5), s4(0.1), s5(0.4)}, {s0(0.1), s0(0.3), s0(0.1), s1(0.5)}
     C3 | C4
Q1 | {s2(0.3), s2(0.2), s3(0.5)}, {s3(0.4), s3(0.1), s4(0.2), s4(0.3)} | {s0(0.3), s1(0.4), s1(0.3)}, {s3(0.5), s4(0.5)}
Q2 | {s4(0.3), s4(0.1), s5(0.6)}, {s1(0.7), s2(0.3)} | {s3(0.3), s4(0.2), s4(0.5)}, {s1(0.5), s2(0.5)}
Q3 | {s0(0.2), s1(0.3), s1(0.1), s2(0.4)}, {s2(0.2), s2(0.2), s3(0.3), s3(0.1), s3(0.2)} | {s6(0.3), s6(0.2), s6(0.5)}, {s0(0.5), s0(0.5)}
Table 7. The aggregate decision matrix F ( ).
C1
Q1 | {s0(0.4), s0(0.6)}, {s6(0.3), s6(0.2), s6(0.5)}
Q2 | {s0(0.5), s1.25(0.1), s1.53(0.2), s1.62(0.2)}, {s3.16(0.3), s3.69(0.2), s3.75(0.2), s4.2(0.3)}
Q3 | {s0(0.5), s0(0.2), s0(0.3)}, {s4.86(0.2), s5(0.4), s6(0.4)}
C2
Q1 | {s4.27(0.4), s4.6(0.3), s6(0.3)}, {s0(0.5), s0(0.2), s1.41(0.3)}
Q2 | {s3.21(0.3), s3.54(0.2), s3.68(0.2), s6(0.3)}, {s0(0.5), s0(0.5)}
Q3 | {s2.92(0.5), s3.6(0.1), s3.95(0.4)}, {s0(0.1), s0(0.3), s0(0.1), s2.63(0.5)}
C3
Q1 | {s3.13(0.3), s3.16(0.2), s4.17(0.5)}, {s0(0.4), s0(0.1), s0(0.2), s1.83(0.3)}
Q2 | {s3.76(0.3), s4.6(0.1), s4.78(0.6)}, {s0(0.7), s1.62(0.3)}
Q3 | {s0.9(0.2), s1(0.3), s1.85(0.1), s2(0.4)}, {s2.26(0.2), s2.77(0.2), s3(0.3), s3.27(0.1), s3.78(0.2)}
C4
Q1 | {s0(0.5), s0(0.5)}, {s4.03(0.3), s4.05(0.4), s6(0.3)}
Q2 | {s0(0.5), s0(0.5)}, {s4.5(0.3), s4.86(0.2), s6(0.5)}
Q3 | {s0(0.5), s0(0.5)}, {s6(0.3), s6(0.2), s6(0.5)}
Table 8. The separation measures for each alternative.
       Q1      Q2      Q3
D+   0.1561  0.0549  0.2055
D−   0.1236  0.2626  0.0561
Table 9. The detailed results for different values of the parameter λ.
       λ=2      λ=3      λ=5      λ=8      λ=10     λ=12
V1   −0.2801  −0.2335  −0.2056  −0.1970  −0.1957  −0.1953
V2    0.1334   0.1078   0.0959   0.0929   0.0925   0.0923
V3   −0.4797  −0.4052  −0.3806  −0.3768  −0.3763  −0.3761
Table 10. The separation measures for each alternative.
       Q1      Q2      Q3
D+   0.1561  0.0549  0.2055
D−   0.1236  0.2626  0.0561
Table 11. The detailed results for different values of the parameter λ.
      λ=2     λ=3     λ=5     λ=8     λ=10    λ=12
R1  0.4419  0.4472  0.4582  0.4635  0.4644  0.4647
R2  0.8271  0.8202  0.8156  0.8142  0.8140  0.8140
R3  0.2145  0.2306  0.2347  0.2349  0.2350  0.2351
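The λ = 2 column of Table 11 can be reproduced directly from the separation measures in Table 8 with the classic TOPSIS relative closeness coefficient R_i = D_i^− / (D_i^+ + D_i^−). The short sketch below checks this; the function and variable names are ours, purely illustrative:

```python
# Classic TOPSIS relative closeness, computed from the separation
# measures D+ and D- reported in Table 8.

def closeness(d_plus: float, d_minus: float) -> float:
    """R_i = D_i^- / (D_i^+ + D_i^-); values closer to 1 rank higher."""
    return d_minus / (d_plus + d_minus)

# (D_i^+, D_i^-) for the alternatives Q1, Q2, Q3 (Table 8).
separations = {"Q1": (0.1561, 0.1236),
               "Q2": (0.0549, 0.2626),
               "Q3": (0.2055, 0.0561)}

for name, (dp, dm) in separations.items():
    print(f"{name}: R = {closeness(dp, dm):.4f}")
```

This yields R ≈ 0.4419, 0.8271, 0.2144 for Q1, Q2, Q3, matching the λ = 2 column of Table 11 up to last-digit rounding of the tabulated distances, and gives the ranking Q2 ≻ Q1 ≻ Q3.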
Table 12. Preference rankings under various values of the loss aversion parameter γ.
     Distance Measure |                        Behavioral TOPSIS, V (Rank)
       D+      D−     |   γ=0.5        γ=0.8        γ=1          γ=1.5        γ=2.25       γ=5
Q1   0.1561  0.1236   |   0.0613 (2)   0.0028 (2)  −0.0362 (2)  −0.1338 (2)  −0.2801 (2)  −0.8167 (2)
Q2   0.0549  0.2626   |   0.2695 (1)   0.2461 (1)   0.2306 (1)   0.1917 (1)   0.1334 (1)  −0.0805 (1)
Q3   0.2055  0.0561   |  −0.0449 (3)  −0.1195 (3)  −0.1691 (3)  −0.2934 (3)  −0.4797 (3)  −1.1629 (3)
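The behavioral values in Table 12 are consistent with a prospect-theory style valuation V_i = (D_i^−)^α − γ(D_i^+)^β with α = β = 0.88, the commonly used Tversky–Kahneman value-function exponents; note that these exponents are an assumption on our part, inferred by back-fitting the tabulated values. A minimal sketch under that assumption:

```python
# Sketch of a behavioral TOPSIS value under an assumed prospect-theory
# form V_i = (D_i^-)**alpha - gamma * (D_i^+)**beta, alpha = beta = 0.88.
# gamma is the loss-aversion parameter varied across Table 12.

ALPHA = BETA = 0.88  # assumed Tversky-Kahneman value-function exponents

def behavioral_value(d_plus: float, d_minus: float, gamma: float) -> float:
    """Gain from the distance to the negative ideal, loss (amplified by
    gamma) from the distance to the positive ideal."""
    return d_minus ** ALPHA - gamma * d_plus ** BETA

# (name, D_i^+, D_i^-) from Table 8/10.
separations = [("Q1", 0.1561, 0.1236),
               ("Q2", 0.0549, 0.2626),
               ("Q3", 0.2055, 0.0561)]

for gamma in (0.5, 1.0, 2.25, 5.0):
    ranked = sorted(separations,
                    key=lambda t: -behavioral_value(t[1], t[2], gamma))
    print(f"gamma={gamma}: " + " > ".join(name for name, *_ in ranked))
```

For every γ this reproduces the ordering Q2 ≻ Q1 ≻ Q3 of Table 12, and the individual values match the table to within rounding of the tabulated distances (e.g., V_1 ≈ −0.2801 at γ = 2.25), illustrating how a larger loss aversion γ lowers every V_i without changing the ranking in this example.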
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Liu, D.; Huang, A.; Liu, Y.; Liu, Z. An Extension TOPSIS Method Based on the Decision Maker’s Risk Attitude and the Adjusted Probabilistic Fuzzy Set. Symmetry 2021, 13, 891. https://doi.org/10.3390/sym13050891
