Article

Multiple-Attribute Decision-Making Method Using Similarity Measures of Hesitant Linguistic Neutrosophic Numbers Regarding Least Common Multiple Cardinality

Department of Electrical Engineering and Automation, Shaoxing University, 508 Huancheng West Road, Shaoxing 312000, Zhejiang, China
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(8), 330; https://doi.org/10.3390/sym10080330
Submission received: 10 July 2018 / Revised: 5 August 2018 / Accepted: 7 August 2018 / Published: 9 August 2018
(This article belongs to the Special Issue Fuzzy Techniques for Decision Making 2018)

Abstract
Linguistic neutrosophic numbers (LNNs) are a powerful tool for describing fuzzy information with three independent linguistic variables (LVs), which express the degrees of truth, uncertainty, and falsity, respectively. However, existing LNNs cannot depict the hesitancy of the decision-maker (DM). To solve this issue, this paper first defines a hesitant linguistic neutrosophic number (HLNN), which consists of several LNNs regarding an evaluated object and represents the hesitant and uncertain information of DMs in the decision-making process. Then, based on the least common multiple cardinality (LCMC), we present generalized distance and similarity measures of HLNNs and develop a similarity measure-based multiple-attribute decision-making (MADM) method to handle MADM problems in the HLNN setting. Finally, the feasibility of the proposed approach is verified by an investment decision case.

1. Introduction

In the real world, linguistic expressions are well-suited to the thinking and expressing patterns of human beings. Due to the vagueness of languages and the complexity of decision-making environments, the linguistic fuzzy theory has been well developed in the past decades and shows irreplaceable advantages in the fuzzy decision-making field. Linguistic variables (LVs) were defined for fuzzy reasoning and decision-making [1,2,3,4]. Linguistic uncertain variables [5,6] (interval-valued linguistic variables) were then defined to depict uncertain linguistic information in decision-making problems [7,8]. After that, a linguistic intuitionistic fuzzy number (LIFN) [9], which contains two independent LVs to describe the degrees of truth and falsity, respectively, was presented to handle the uncertainty and incompleteness in linguistic decision-making environments [10]. Furthermore, with the wide application of the neutrosophic theory in decision-making [11,12,13], Fang and Ye [14] proposed a linguistic neutrosophic number (LNN) by adding a new LV to the LIFN for representing the indeterminacy degree, so as to handle indeterminate and inconsistent linguistic information [15]. Although there exist some research works on LNNs [14,15], existing LNNs cannot depict the hesitancy of decision-makers (DMs) in the linguistic assessment of alternatives.
Concerning the handling of hesitant human cognition in decision-making environments, many works have been published so far. Torra and Narukawa [16] and Torra [17] originally introduced hesitant fuzzy sets (HFSs) to express hesitancy by allowing the membership to contain several possible values. Then, for linguistic decision-making problems, the expression of a hesitant fuzzy linguistic set (HFLS) [18] was obtained by combining a linguistic term (LT) set with a HFS so as to satisfy the hesitant linguistic evaluation requirements [19,20] of DMs. After that, an interval-valued HFLS [21] was presented as an extension by combining an interval-valued LT set with a HFS. Recently, Ye [22] proposed the hesitant neutrosophic linguistic number (HNLN) to handle hesitant decision-making problems with neutrosophic linguistic numbers, which contain partial determinacy and partial indeterminacy. However, there is no definition or decision-making method for hesitant sets of LNNs in the existing literature. Additionally, in the hesitant linguistic expressions of DMs, the components of two hesitant sets generally differ in length, and thus it is difficult to directly perform measure calculations between hesitant sets. For this reason, several researchers have proposed extension methods that lengthen the shorter of the two hesitant sets by adding the minimum, maximum, or arbitrary values [23,24] until the lengths are identical. However, these extension methods depend too much on the subjective preferences and interests of the DMs. To solve this problem, we have already introduced the least common multiple cardinality (LCMC) to extend the hesitant fuzzy elements in our previous research works [22,25], which makes the decision-making calculation of HFSs more objective.
As aforementioned, there is a gap regarding hesitant LNNs in existing studies. For instance, suppose that we hesitate between two single-valued LNNs, <h7, h3, h4> and <h5, h3, h1>, from the given LT set H = {hs | s ∈ [0, 8]} regarding an evaluated object. It is difficult to express the hesitation information and the LNN information of the DMs simultaneously by a unique LNN or a unique HFS. Therefore, for the purposes of satisfying the demand of hesitant decision-making with LNNs and ensuring the objectivity of the measure calculation, this paper aims to (i) define the concept of HLNNs by combining HFSs with LNNs, (ii) present the LCMC-based generalized distance and similarity measures of HLNNs for a more objective measure calculation of HLNN information, and (iii) propose a novel multiple-attribute decision-making (MADM) method based on the proposed LCMC-based similarity measure in the HLNN setting.
In order to do so, Section 2 briefly reviews LNNs. Section 3 defines a HLNN and a HLNN set. Then, in Section 4, the LCMC-based generalized distance and similarity measures of HLNNs are presented. In Section 5, a new MADM method is developed by using the proposed similarity measure of HLNNs. In Section 6, the feasibility of the proposed approach is demonstrated by an investment case. The conclusions and future research directions for HLNNs are discussed in the last section.

2. Linguistic Neutrosophic Numbers (LNNs)

Fang and Ye [14] originally presented the following definition of the LNN:
Definition 1
([14]). Let H = {h0, h1, ..., hτ} be a LT set, where τ + 1 is an odd cardinality. A LNN can be defined as ϑ = <hT, hU, hF> for hT, hU, hF ∈ H and T, U, F ∈ [0, τ], where hT, hU, and hF represent the degrees of truth, indeterminacy, and falsity, respectively.
For the comparison of LNNs, the score and accuracy functions of LNNs are defined as follows [14]:
Definition 2
([14]). Let ϑ = <hT, hU, hF> be a LNN in H. Then its score function can be given by:
$$S(\vartheta) = (2\tau + T - U - F)/(3\tau) \quad \text{for } S(\vartheta) \in [0, 1], \tag{1}$$
and its accuracy function can be expressed as
$$V(\vartheta) = (T - F)/\tau \quad \text{for } V(\vartheta) \in [-1, 1]. \tag{2}$$
Definition 3
([14]). Let $\vartheta_\alpha = \langle h_{T_\alpha}, h_{U_\alpha}, h_{F_\alpha}\rangle$ and $\vartheta_\beta = \langle h_{T_\beta}, h_{U_\beta}, h_{F_\beta}\rangle$ be two LNNs in H. Then the following relations exist:
(1)
If S(ϑα) < S(ϑβ), then ϑα < ϑβ;
(2)
If S(ϑα) > S(ϑβ), then ϑα > ϑβ;
(3)
If S(ϑα) = S(ϑβ) and V(ϑα) < V(ϑβ), then ϑα < ϑβ;
(4)
If S(ϑα) = S(ϑβ) and V(ϑα) > V(ϑβ), then ϑα > ϑβ;
(5)
If S(ϑα) = S(ϑβ) and V(ϑα) = V(ϑβ), then ϑα = ϑβ.
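The score-based comparison of Definitions 2 and 3 can be sketched in a few lines of Python (an illustrative sketch rather than code from the paper; the function names are ours). Exact fractions avoid rounding issues when scores tie:

```python
from fractions import Fraction

TAU = 8  # τ for the LT set H = {h0, h1, ..., h8} used later in the paper

def score(lnn, tau=TAU):
    """Score function S(ϑ) = (2τ + T − U − F)/(3τ) of Equation (1)."""
    T, U, F = lnn
    return Fraction(2 * tau + T - U - F, 3 * tau)

def accuracy(lnn, tau=TAU):
    """Accuracy function V(ϑ) = (T − F)/τ of Equation (2)."""
    T, U, F = lnn
    return Fraction(T - F, tau)

def rank_key(lnn, tau=TAU):
    """Definition 3: compare LNNs by score first, then by accuracy."""
    return (score(lnn, tau), accuracy(lnn, tau))

# <h7, h3, h4> scores 16/24 and <h6, h1, h2> scores 19/24,
# so <h6, h1, h2> ranks higher although its truth grade is lower.
print(rank_key((7, 3, 4)) < rank_key((6, 1, 2)))  # True
```

Because the key is a (score, accuracy) tuple, Python's lexicographic tuple comparison reproduces cases (1)–(5) of Definition 3 directly.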

3. Hesitant Linguistic Neutrosophic Numbers (HLNNs) and HLNN Set

Torra and Narukawa [16] and Torra [17] first defined the HFS as follows:
Definition 4
([16,17]). Assume that S is a universe set; then a HFS N on S can be given by
$$N = \{\langle s, E(s)\rangle \mid s \in S\},$$
where E(s), the hesitant component of N, is a set of some values in [0, 1] that represents all possible membership degrees of s.
By integrating HFS with LNN, we define a HLNN set as follows:
Definition 5.
Set a universe of discourse S = {s1, s2, …, sq} and a finite LT set H = {h0, h1, …, hτ}, and then a HLNN set Nl on S can be expressed as
$$N_l = \{\langle s_j, E_l(s_j)\rangle \mid s_j \in S, j = 1, 2, \ldots, q\},$$
where $E_l(s_j)$ is a set of mj LNNs, denoted by a HLNN $E_l(s_j) = \{\langle h_{T_j^k}, h_{U_j^k}, h_{F_j^k}\rangle \mid h_{T_j^k}, h_{U_j^k}, h_{F_j^k} \in H, k = 1, 2, \ldots, m_j\}$ for sj ∈ S.

4. LCMC-Based Distance and Similarity Measures of HLNNs

In most situations, the cardinal numbers (the numbers of LNNs) of the HLNNs evaluated for the same object are different. Thus, it is necessary to make the cardinal numbers of two HLNNs identical before computing the distance and similarity measures between them.
We assume that p HLNNs on S = {s1, s2, …, sq} are $E_l^1(s_j), E_l^2(s_j), \ldots, E_l^p(s_j)$ for sj ∈ S (j = 1, 2, ..., q). Then each HLNN $E_l^i(s_j)$ (i = 1, 2, …, p) can be given by
$$E_l^i(s_j) = \{\langle h_{T_{ij}^{1}}, h_{U_{ij}^{1}}, h_{F_{ij}^{1}}\rangle, \langle h_{T_{ij}^{2}}, h_{U_{ij}^{2}}, h_{F_{ij}^{2}}\rangle, \ldots, \langle h_{T_{ij}^{m_{ij}}}, h_{U_{ij}^{m_{ij}}}, h_{F_{ij}^{m_{ij}}}\rangle\},$$
where mij is the cardinal number of $E_l^i(s_j)$ (i = 1, 2, …, p and j = 1, 2, …, q).
Provided that the LCMC of mij (i = 1, 2, ..., p) is cj (j = 1, 2, …, q), the extended HLNN $E_l^{io}(s_j)$ (i = 1, 2, …, p and j = 1, 2, …, q) is obtained by repeating each LNN $\langle h_{T_{ij}^{k}}, h_{U_{ij}^{k}}, h_{F_{ij}^{k}}\rangle$ (k = 1, 2, ..., mij) in $E_l^i(s_j)$ Rij times, so that $E_l^{io}(s_j)$ contains exactly cj LNNs:
$$E_l^{io}(s_j) = \{\underbrace{\langle h_{T_{ij}^{1}}, h_{U_{ij}^{1}}, h_{F_{ij}^{1}}\rangle, \ldots}_{R_{ij}}, \underbrace{\langle h_{T_{ij}^{2}}, h_{U_{ij}^{2}}, h_{F_{ij}^{2}}\rangle, \ldots}_{R_{ij}}, \ldots, \underbrace{\langle h_{T_{ij}^{m_{ij}}}, h_{U_{ij}^{m_{ij}}}, h_{F_{ij}^{m_{ij}}}\rangle, \ldots}_{R_{ij}}\},$$
where the number of occurrences Rij of each LNN is calculated by
$$R_{ij} = c_j / m_{ij}. \tag{3}$$
Additionally, the elements $\vartheta_{ij}^{\sigma(k)} = \langle h_{T_{ij}^{\sigma(k)}}, h_{U_{ij}^{\sigma(k)}}, h_{F_{ij}^{\sigma(k)}}\rangle$ (k = 1, 2, …, cj) in $E_l^{io}(s_j)$ are arranged in ascending order, denoted as $E_l^{io}(s_j) = \{\vartheta_{ij}^{\sigma(1)}, \vartheta_{ij}^{\sigma(2)}, \ldots, \vartheta_{ij}^{\sigma(c_j)}\}$ (i = 1, 2, …, p and j = 1, 2, …, q), where $\sigma: (1, 2, \ldots, c_j) \to (1, 2, \ldots, c_j)$ is a permutation satisfying $\vartheta_{ij}^{\sigma(k)} \le \vartheta_{ij}^{\sigma(k+1)}$ (k = 1, 2, …, cj − 1).
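The LCMC extension and ascending ordering described above can be sketched in Python (an illustrative sketch with our own function names; each HLNN is a list of (T, U, F) index triples):

```python
from math import lcm

def lnn_key(lnn, tau=8):
    # Ordering of Definition 3: score numerator, then accuracy numerator
    T, U, F = lnn
    return (2 * tau + T - U - F, T - F)

def lcmc_extend(hlnns, tau=8):
    """Extend each HLNN to the LCMC c_j of all cardinalities m_ij by
    repeating every LNN R_ij = c_j / m_ij times (Equation (3)), then
    sort each extended HLNN in ascending order."""
    c = lcm(*(len(h) for h in hlnns))  # least common multiple cardinality c_j
    return [sorted((lnn for lnn in h for _ in range(c // len(h))),
                   key=lambda x: lnn_key(x, tau)) for h in hlnns]

# Cardinalities 2 and 3 give c_j = 6, with R = 3 and R = 2 repetitions:
e1, e2 = lcmc_extend([[(6, 2, 2), (4, 2, 3)],
                      [(7, 1, 1), (7, 2, 3), (6, 3, 4)]])
print(e1)  # [(4, 2, 3), (4, 2, 3), (4, 2, 3), (6, 2, 2), (6, 2, 2), (6, 2, 2)]
```

Both extended HLNNs now have the same cardinality, so element-by-element distance calculations become possible.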
Definition 6.
Let $N_l^1 = \{E_l^1(s_1), E_l^1(s_2), \ldots, E_l^1(s_q)\}$ and $N_l^2 = \{E_l^2(s_1), E_l^2(s_2), \ldots, E_l^2(s_q)\}$ be two HLNN sets on S = {s1, s2, ..., sq}, where $E_l^1(s_j)$ and $E_l^2(s_j)$ (j = 1, 2, …, q) are HLNNs in a LT set H = {h0, h1, ..., hτ} for hj ∈ H. Let f(hj) = j/τ be a linguistic scale function. Then, the normalized generalized distance between $N_l^1$ and $N_l^2$ can be represented as:
$$d(N_l^1, N_l^2) = \left\{\frac{1}{q}\sum_{j=1}^{q}\left[\frac{1}{3c_j}\sum_{k=1}^{c_j}\left(\left|f(h_{T_{1j}^{\sigma(k)}}) - f(h_{T_{2j}^{\sigma(k)}})\right|^{\rho} + \left|f(h_{U_{1j}^{\sigma(k)}}) - f(h_{U_{2j}^{\sigma(k)}})\right|^{\rho} + \left|f(h_{F_{1j}^{\sigma(k)}}) - f(h_{F_{2j}^{\sigma(k)}})\right|^{\rho}\right)\right]\right\}^{1/\rho}$$
$$= \left\{\frac{1}{q}\sum_{j=1}^{q}\left[\frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left|T_{1j}^{\sigma(k)} - T_{2j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{2j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{2j}^{\sigma(k)}\right|^{\rho}\right)\right]\right\}^{1/\rho} \quad \text{for } \rho > 0. \tag{4}$$
Obviously, $d(N_l^1, N_l^2)$ reduces to the normalized Hamming distance for ρ = 1 and to the normalized Euclidean distance for ρ = 2.
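A quick numeric sketch of Equation (4) and its Hamming/Euclidean special cases (our own illustrative Python; the inputs are assumed to be already LCMC-extended and sorted so that elements align):

```python
def generalized_distance(n1, n2, tau=8, rho=1):
    """Equation (4): normalized generalized distance between two HLNN
    sets, each given as a list (one entry per element s_j) of extended
    HLNNs, i.e. lists of (T, U, F) index triples of equal length c_j."""
    q = len(n1)
    acc = 0.0
    for e1, e2 in zip(n1, n2):
        c = len(e1)  # c_j, identical in e1 and e2 after LCMC extension
        s = sum(abs(t1 - t2) ** rho + abs(u1 - u2) ** rho + abs(f1 - f2) ** rho
                for (t1, u1, f1), (t2, u2, f2) in zip(e1, e2))
        acc += s / (3 * c * tau ** rho)
    return (acc / q) ** (1.0 / rho)

n1 = [[(6, 1, 2), (7, 3, 4)]]
n2 = [[(6, 1, 1), (7, 2, 1)]]
print(generalized_distance(n1, n2, rho=1))  # normalized Hamming distance
print(generalized_distance(n1, n2, rho=2))  # normalized Euclidean distance
```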
For the generalized distance $d(N_l^1, N_l^2)$, we have the following proposition:
Proposition 1.
For any two HLNN sets $N_l^1 = \{E_l^1(s_1), E_l^1(s_2), \ldots, E_l^1(s_q)\}$ and $N_l^2 = \{E_l^2(s_1), E_l^2(s_2), \ldots, E_l^2(s_q)\}$, the generalized distance $d(N_l^1, N_l^2)$ between $N_l^1$ and $N_l^2$ for ρ > 0 satisfies the following properties:
(HP1)
$0 \le d(N_l^1, N_l^2) \le 1$;
(HP2)
$d(N_l^1, N_l^2) = 0$ if and only if $N_l^1 = N_l^2$;
(HP3)
$d(N_l^1, N_l^2) = d(N_l^2, N_l^1)$;
(HP4)
Let $N_l^3 = \{E_l^3(s_1), E_l^3(s_2), \ldots, E_l^3(s_q)\}$ be a HLNN set; then $d(N_l^1, N_l^2) \le d(N_l^1, N_l^3)$ and $d(N_l^2, N_l^3) \le d(N_l^1, N_l^3)$ if $N_l^1 \subseteq N_l^2 \subseteq N_l^3$.
Proof. 
It is obvious that the properties (HP1)–(HP3) are satisfied for $d(N_l^1, N_l^2)$. Thus, we only need to prove the property (HP4).
Since $N_l^1 \subseteq N_l^2 \subseteq N_l^3$, there exists $E_l^{1o}(s_j) \subseteq E_l^{2o}(s_j) \subseteq E_l^{3o}(s_j)$ for sj ∈ S (j = 1, 2, ..., q), which implies $T_{1j}^{\sigma(k)} \le T_{2j}^{\sigma(k)} \le T_{3j}^{\sigma(k)}$, $U_{1j}^{\sigma(k)} \ge U_{2j}^{\sigma(k)} \ge U_{3j}^{\sigma(k)}$, and $F_{1j}^{\sigma(k)} \ge F_{2j}^{\sigma(k)} \ge F_{3j}^{\sigma(k)}$ for k = 1, 2, ..., cj. It follows that
$$\left|T_{1j}^{\sigma(k)} - T_{2j}^{\sigma(k)}\right|^{\rho} \le \left|T_{1j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho}, \quad \left|T_{2j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho} \le \left|T_{1j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho},$$
$$\left|U_{1j}^{\sigma(k)} - U_{2j}^{\sigma(k)}\right|^{\rho} \le \left|U_{1j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho}, \quad \left|U_{2j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho} \le \left|U_{1j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho},$$
$$\left|F_{1j}^{\sigma(k)} - F_{2j}^{\sigma(k)}\right|^{\rho} \le \left|F_{1j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho}, \quad \left|F_{2j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho} \le \left|F_{1j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho}.$$
Then there are the following inequalities:
$$\left|T_{1j}^{\sigma(k)} - T_{2j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{2j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{2j}^{\sigma(k)}\right|^{\rho} \le \left|T_{1j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho},$$
$$\left|T_{2j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho} + \left|U_{2j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho} + \left|F_{2j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho} \le \left|T_{1j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho}.$$
Thus, the following relations can be further obtained:
$$\frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left|T_{1j}^{\sigma(k)} - T_{2j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{2j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{2j}^{\sigma(k)}\right|^{\rho}\right) \le \frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left|T_{1j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho}\right),$$
$$\frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left|T_{2j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho} + \left|U_{2j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho} + \left|F_{2j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho}\right) \le \frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left|T_{1j}^{\sigma(k)} - T_{3j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{3j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{3j}^{\sigma(k)}\right|^{\rho}\right).$$
By Equation (4), there are $d(N_l^1, N_l^2) \le d(N_l^1, N_l^3)$ and $d(N_l^2, N_l^3) \le d(N_l^1, N_l^3)$ for ρ > 0. Therefore, the property (HP4) holds. □
If we consider the weight wj of an element sj ∈ S with wj ∈ [0, 1] and $\sum_{j=1}^{q} w_j = 1$, the generalized weighted distance between $N_l^1$ and $N_l^2$ is
$$d_w(N_l^1, N_l^2) = \left\{\sum_{j=1}^{q} w_j\left[\frac{1}{3c_j}\sum_{k=1}^{c_j}\left(\left|f(h_{T_{1j}^{\sigma(k)}}) - f(h_{T_{2j}^{\sigma(k)}})\right|^{\rho} + \left|f(h_{U_{1j}^{\sigma(k)}}) - f(h_{U_{2j}^{\sigma(k)}})\right|^{\rho} + \left|f(h_{F_{1j}^{\sigma(k)}}) - f(h_{F_{2j}^{\sigma(k)}})\right|^{\rho}\right)\right]\right\}^{1/\rho}$$
$$= \left\{\sum_{j=1}^{q} w_j\left[\frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left|T_{1j}^{\sigma(k)} - T_{2j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{2j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{2j}^{\sigma(k)}\right|^{\rho}\right)\right]\right\}^{1/\rho} \quad \text{for } \rho > 0. \tag{5}$$
Since the similarity and distance measures are complementary to each other, the weighted similarity measure between $N_l^1$ and $N_l^2$ can be represented by
$$S_w(N_l^1, N_l^2) = 1 - d_w(N_l^1, N_l^2) = 1 - \left\{\sum_{j=1}^{q} w_j\left[\frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left|T_{1j}^{\sigma(k)} - T_{2j}^{\sigma(k)}\right|^{\rho} + \left|U_{1j}^{\sigma(k)} - U_{2j}^{\sigma(k)}\right|^{\rho} + \left|F_{1j}^{\sigma(k)} - F_{2j}^{\sigma(k)}\right|^{\rho}\right)\right]\right\}^{1/\rho} \quad \text{for } \rho > 0. \tag{6}$$
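As a small sketch of Equations (5) and (6) (our own Python, assuming both HLNN sets have already been LCMC-extended and sorted so that their elements align):

```python
def weighted_similarity(n1, n2, weights, tau=8, rho=1):
    """S_w(N1, N2) = 1 − d_w(N1, N2): LCMC-based weighted similarity.
    n1, n2: one extended HLNN (list of (T, U, F) triples) per attribute,
    with matching cardinality c_j in both sets for every attribute j."""
    total = 0.0
    for w, e1, e2 in zip(weights, n1, n2):
        c = len(e1)  # c_j, identical for e1 and e2 after LCMC extension
        s = sum(abs(t1 - t2) ** rho + abs(u1 - u2) ** rho + abs(f1 - f2) ** rho
                for (t1, u1, f1), (t2, u2, f2) in zip(e1, e2))
        total += w * s / (3 * c * tau ** rho)
    return 1.0 - total ** (1.0 / rho)

# One attribute (weight 1), c_j = 6, compared against the ideal <h8, h0, h0>:
n1 = [[(4, 2, 3)] * 3 + [(6, 2, 2)] * 3]
ideal = [[(8, 0, 0)] * 6]
print(weighted_similarity(n1, ideal, [1.0]))  # 0.6875
```

Identical sets yield similarity 1, and maximally different sets yield 0, in line with properties (SP1)–(SP2) below.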
Similar to the properties (HP1)–(HP4) satisfied by the generalized distance measure in Proposition 1, the similarity measure $S_w(N_l^1, N_l^2)$ satisfies the following proposition:
Proposition 2.
The similarity measure $S_w(N_l^1, N_l^2)$ for ρ > 0 satisfies the following properties:
(SP1)
$0 \le S_w(N_l^1, N_l^2) \le 1$;
(SP2)
$S_w(N_l^1, N_l^2) = 1$ if and only if $N_l^1 = N_l^2$;
(SP3)
$S_w(N_l^1, N_l^2) = S_w(N_l^2, N_l^1)$;
(SP4)
Let $N_l^3$ be a HLNN set; then $S_w(N_l^1, N_l^2) \ge S_w(N_l^1, N_l^3)$ and $S_w(N_l^2, N_l^3) \ge S_w(N_l^1, N_l^3)$ if $N_l^1 \subseteq N_l^2 \subseteq N_l^3$.
Proof. 
It is clear that $S_w(N_l^1, N_l^2)$ satisfies the properties (SP1)–(SP3). Thus, we only prove the property (SP4) here.
According to the proved property (HP4) in Proposition 1, if $N_l^1 \subseteq N_l^2 \subseteq N_l^3$, the relations $d_w(N_l^1, N_l^2) \le d_w(N_l^1, N_l^3)$ and $d_w(N_l^2, N_l^3) \le d_w(N_l^1, N_l^3)$ hold for ρ > 0. Since the similarity measure is the complement of the distance measure, both $S_w(N_l^1, N_l^2) \ge S_w(N_l^1, N_l^3)$ and $S_w(N_l^2, N_l^3) \ge S_w(N_l^1, N_l^3)$ can be easily obtained. Therefore, the property (SP4) holds. □

5. MADM Method Using the Similarity Measure of HLNNs

For a MADM problem in the HLNN setting, some DMs need to evaluate p alternatives (denoted by G = {g1, g2, …, gp}) over q attributes (denoted by S = {s1, s2, …, sq}) from the LT set H = {h0, h1, …, hτ}. Then, a weight vector W = (w1, w2, …, wq), subject to 0 ≤ wj ≤ 1 (j = 1, 2, ..., q) and $\sum_{j=1}^{q} w_j = 1$, represents the importance of the attributes in S. Thus, the HLNN decision matrix M can be expressed as:
$$M = \left(E_l^i(s_j)\right)_{p \times q} = \begin{bmatrix} E_l^1(s_1) & E_l^1(s_2) & \cdots & E_l^1(s_q) \\ E_l^2(s_1) & E_l^2(s_2) & \cdots & E_l^2(s_q) \\ \vdots & \vdots & \ddots & \vdots \\ E_l^p(s_1) & E_l^p(s_2) & \cdots & E_l^p(s_q) \end{bmatrix},$$
where the rows correspond to the alternatives g1, g2, …, gp, $E_l^i(s_j) = \{\langle h_{T_{ij}^{1}}, h_{U_{ij}^{1}}, h_{F_{ij}^{1}}\rangle, \langle h_{T_{ij}^{2}}, h_{U_{ij}^{2}}, h_{F_{ij}^{2}}\rangle, \ldots, \langle h_{T_{ij}^{m_{ij}}}, h_{U_{ij}^{m_{ij}}}, h_{F_{ij}^{m_{ij}}}\rangle\}$ is a HLNN for sj ∈ S, and mij is the number of LNNs in $E_l^i(s_j)$ (i = 1, 2, …, p and j = 1, 2, …, q).
On the basis of the proposed similarity measure, a novel MADM method of HLNN is presented by the following steps:
Step 1: For any HLNN E l i ( s j ) (j = 1, 2, …, q) in M, rank all elements ϑ i j σ ( k ) (k = 1, 2, …, mij) in each HLNN E l i ( s j ) (j = 1, 2, …, q) in an ascending order according to their score and accuracy functions, then yield the corresponding extended HLNN E l i o ( s j ) based on the LCMC cj and the occurrence number Rij of every LNN in E l i ( s j ) obtained by Equation (3). Hence, the extended decision matrix M ° is
$$M^o = \left(E_l^{io}(s_j)\right)_{p \times q} = \begin{bmatrix} E_l^{1o}(s_1) & E_l^{1o}(s_2) & \cdots & E_l^{1o}(s_q) \\ E_l^{2o}(s_1) & E_l^{2o}(s_2) & \cdots & E_l^{2o}(s_q) \\ \vdots & \vdots & \ddots & \vdots \\ E_l^{po}(s_1) & E_l^{po}(s_2) & \cdots & E_l^{po}(s_q) \end{bmatrix},$$
where $E_l^{io}(s_j) = \{\vartheta_{ij}^{\sigma(1)}, \vartheta_{ij}^{\sigma(2)}, \ldots, \vartheta_{ij}^{\sigma(c_j)}\}$ (i = 1, 2, …, p and j = 1, 2, …, q) satisfies $\vartheta_{ij}^{\sigma(k)} \le \vartheta_{ij}^{\sigma(k+1)}$ (k = 1, 2, …, cj − 1).
Step 2: Specify an ideal HLNN set as $g^* = \{E_l^o(s_1), E_l^o(s_2), \ldots, E_l^o(s_q)\} = \{\{\vartheta_1^{\sigma(1)}, \ldots, \vartheta_1^{\sigma(c_1)}\}, \{\vartheta_2^{\sigma(1)}, \ldots, \vartheta_2^{\sigma(c_2)}\}, \ldots, \{\vartheta_q^{\sigma(1)}, \ldots, \vartheta_q^{\sigma(c_q)}\}\}$ with $\vartheta_j^{\sigma(k)} = \langle h_\tau, h_0, h_0\rangle$ for all k = 1, 2, ..., cj and j = 1, 2, ..., q.
Hence, the similarity measure between gi (i = 1, 2, …, p) and g* can be calculated by
$$S_w(g_i, g^*) = 1 - d_w(g_i, g^*) = 1 - \left\{\sum_{j=1}^{q} w_j\left[\frac{1}{3c_j}\sum_{k=1}^{c_j}\left(\left|f(h_{T_{ij}^{\sigma(k)}}) - f(h_\tau)\right|^{\rho} + \left|f(h_{U_{ij}^{\sigma(k)}}) - f(h_0)\right|^{\rho} + \left|f(h_{F_{ij}^{\sigma(k)}}) - f(h_0)\right|^{\rho}\right)\right]\right\}^{1/\rho}$$
$$= 1 - \left\{\sum_{j=1}^{q} w_j\left[\frac{1}{3c_j\tau^{\rho}}\sum_{k=1}^{c_j}\left(\left(\tau - T_{ij}^{\sigma(k)}\right)^{\rho} + \left(U_{ij}^{\sigma(k)}\right)^{\rho} + \left(F_{ij}^{\sigma(k)}\right)^{\rho}\right)\right]\right\}^{1/\rho} \quad \text{for } \rho > 0. \tag{7}$$
Step 3: According to the similarity measure results, rank the alternatives in G = {g1, g2, …, gp} in descending order and choose the best one.
Step 4: End.
The HLNN is a hybrid of the LNN and HFS, which inherits the advantages of both and expresses decision-making information as a hesitant set of LNNs. The proposed LCMC-based distance and similarity measures can deal with not only HLNN information but also LNN information, because a LNN is a special case of a HLNN in which the DMs have no hesitation. In contrast, all existing aggregation operators of LNNs [14] cannot aggregate HLNN information, because a HLNN is a LNN set of arbitrary length; furthermore, existing MADM methods cannot deal with decision-making problems in the HLNN setting.
Moreover, to ensure the objectivity of the measure results, the proposed LCMC-based distance and similarity measures rely on the LCMC extension method for HLNNs rather than on simply adding special components, such as the maximum, minimum, or average values, which depend heavily on the personal interests and preferences of the DMs [23,24] and thus easily lead to subjective decision-making results. Hence, the novel MADM method with HLNNs provides a more general and objective decision-making process for DMs.

6. Actual Example

In this section, to verify whether the novel MADM approach with HLNNs is feasible and reasonable in practical applications, an investment decision-making case adapted from [14] is illustrated under a HLNN environment. In this case, an investment company selects the best of four possible manufacturers, G = {g1, g2, g3, g4}, producing computers (g1), cars (g2), food (g3), and clothing (g4), respectively. The four alternatives must satisfy a set of three attributes, S = {s1, s2, s3}, including the risk (s1), the growth (s2), and the environmental impact (s3), with the importance given by the weight vector W = (0.35, 0.25, 0.4). Now, some DMs are assigned to assess the alternatives over the attributes by HLNN expressions from the given LT set H = {h0: none, h1: lowest, h2: lower, h3: low, h4: moderate, h5: high, h6: higher, h7: highest, h8: perfect}. Then, the assessment results regarding the four alternatives g1, g2, g3, and g4 on the three attributes s1, s2, and s3 can be constructed as
M =
g1: s1: {<h6,h1,h2>, <h6,h1,h2>, <h7,h3,h4>}, s2: {<h7,h2,h1>, <h6,h1,h1>, <h7,h3,h3>}, s3: {<h6,h2,h2>, <h4,h2,h3>};
g2: s1: {<h7,h1,h1>, <h7,h2,h3>, <h6,h3,h4>}, s2: {<h7,h3,h2>, <h6,h1,h1>}, s3: {<h7,h2,h1>, <h4,h2,h3>, <h6,h2,h3>};
g3: s1: {<h6,h2,h2>, <h5,h1,h2>}, s2: {<h7,h1,h1>, <h5,h1,h2>}, s3: {<h6,h2,h2>, <h5,h4,h2>};
g4: s1: {<h7,h1,h2>, <h6,h1,h1>, <h7,h2,h3>}, s2: {<h7,h2,h3>, <h5,h1,h1>}, s3: {<h7,h2,h1>, <h5,h2,h3>}.
Thus, there are the following decision steps:
Step 1: According to the score and accuracy functions obtained by Equations (1) and (2), rank the LNNs ϑ i j σ ( k ) (k = 1, 2, …, mij) in each HLNN E l i ( s j ) (i = 1, 2, 3, 4 and j = 1, 2, 3) in an ascending order, and obtain the following matrix:
M =
g1: s1: {<h7,h3,h4>, <h6,h1,h2>, <h6,h1,h2>}, s2: {<h7,h3,h3>, <h6,h1,h1>, <h7,h2,h1>}, s3: {<h4,h2,h3>, <h6,h2,h2>};
g2: s1: {<h6,h3,h4>, <h7,h2,h3>, <h7,h1,h1>}, s2: {<h7,h3,h2>, <h6,h1,h1>}, s3: {<h4,h2,h3>, <h6,h2,h3>, <h7,h2,h1>};
g3: s1: {<h5,h1,h2>, <h6,h2,h2>}, s2: {<h5,h1,h2>, <h7,h1,h1>}, s3: {<h5,h4,h2>, <h6,h2,h2>};
g4: s1: {<h7,h2,h3>, <h7,h1,h2>, <h6,h1,h1>}, s2: {<h7,h2,h3>, <h5,h1,h1>}, s3: {<h5,h2,h3>, <h7,h2,h1>}.
Then, according to the LCMC cj = 6 (j = 1, 2, 3) and the number of occurrences of LNNs Rij of E l i ( s j ) (i = 1, 2, 3, 4 and j = 1, 2, 3) obtained by Equation (3), yield the following extended decision matrix M ° :
M° =
g1: s1: {<h7,h3,h4> (×2), <h6,h1,h2> (×4)}, s2: {<h7,h3,h3> (×2), <h6,h1,h1> (×2), <h7,h2,h1> (×2)}, s3: {<h4,h2,h3> (×3), <h6,h2,h2> (×3)};
g2: s1: {<h6,h3,h4> (×2), <h7,h2,h3> (×2), <h7,h1,h1> (×2)}, s2: {<h7,h3,h2> (×3), <h6,h1,h1> (×3)}, s3: {<h4,h2,h3> (×2), <h6,h2,h3> (×2), <h7,h2,h1> (×2)};
g3: s1: {<h5,h1,h2> (×3), <h6,h2,h2> (×3)}, s2: {<h5,h1,h2> (×3), <h7,h1,h1> (×3)}, s3: {<h5,h4,h2> (×3), <h6,h2,h2> (×3)};
g4: s1: {<h7,h2,h3> (×2), <h7,h1,h2> (×2), <h6,h1,h1> (×2)}, s2: {<h7,h2,h3> (×3), <h5,h1,h1> (×3)}, s3: {<h5,h2,h3> (×3), <h7,h2,h1> (×3)},
where (×n) denotes n consecutive occurrences of the same LNN.
Step 2: Obtain the similarity measures between the alternatives g1, g2, g3, and g4 and the ideal solution g* = {{<h8,h0,h0> (×6)}, {<h8,h0,h0> (×6)}, {<h8,h0,h0> (×6)}} by Equation (7) for ρ = 1 and 2:
Sw(g1, g*) = 0.7354, Sw(g2, g*) = 0.7493, Sw(g3, g*) = 0.7406, Sw(g4, g*) = 0.7747 for ρ = 1;
Sw(g1, g*) = 0.7121, Sw(g2, g*) = 0.7224, Sw(g3, g*) = 0.7217, Sw(g4, g*) = 0.7525 for ρ = 2.
Step 3: Since Sw(g4, g*) > Sw(g2, g*) > Sw(g3, g*) > Sw(g1, g*) for ρ = 1 and 2, the ranking of the four alternatives is g4 > g2 > g3 > g1; thus, the best choice is g4.
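The whole procedure can be reproduced with a short Python sketch (our own illustrative code, not the authors'). Because the ideal solution repeats the single LNN <h8, h0, h0>, the distance to it does not depend on the ordering of the extended elements, so the sorting of Step 1 can be skipped and each LNN simply counted Rij = cj/mij times; the matrix below holds the multiset data of the extended matrix M°:

```python
from math import lcm

TAU = 8
W = [0.35, 0.25, 0.4]

# HLNN decision matrix: rows g1..g4, columns s1..s3, entries are (T, U, F) triples
M = [
    [[(6,1,2), (6,1,2), (7,3,4)], [(7,2,1), (6,1,1), (7,3,3)], [(6,2,2), (4,2,3)]],
    [[(7,1,1), (7,2,3), (6,3,4)], [(7,3,2), (6,1,1)],          [(7,2,1), (4,2,3), (6,2,3)]],
    [[(6,2,2), (5,1,2)],          [(7,1,1), (5,1,2)],          [(6,2,2), (5,4,2)]],
    [[(7,1,2), (6,1,1), (7,2,3)], [(7,2,3), (5,1,1)],          [(7,2,1), (5,2,3)]],
]

# LCMC c_j of the cardinalities within each attribute column
cols = [lcm(*(len(M[i][j]) for i in range(4))) for j in range(3)]

def sim_to_ideal(row, rho):
    """Equation (7): weighted similarity of one alternative to g*."""
    total = 0.0
    for w, hlnn, c in zip(W, row, cols):
        r = c // len(hlnn)  # occurrence count R_ij of every LNN
        s = sum(r * ((TAU - t) ** rho + u ** rho + f ** rho) for t, u, f in hlnn)
        total += w * s / (3 * c * TAU ** rho)
    return 1.0 - total ** (1.0 / rho)

print(cols)  # [6, 6, 6]
for rho in (1, 2):
    print(rho, [sim_to_ideal(row, rho) for row in M])
# rho = 1: ≈ 0.7354, 0.7493, 0.7406, 0.7747 — g4 ranks best, matching Step 3
# rho = 2: ≈ 0.7121, 0.7224, 0.7217, 0.7525
```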
By following the above steps, the MADM calculations for ρ ∈ [3, 100] are further performed for this example. The relative decision results, including the similarity measure, ranking order, average value (AV), standard deviation (SD), and the best alternative, are shown in Table 1. Obviously, the ranking order is g4 > g2 > g3 > g1 for ρ = 1 and 2, and it becomes g4 > g3 > g2 > g1 for ρ > 2, while the best alternative is always g4.

7. Discussion and Analysis

In this section, further discussion and analysis are carried out for the resolution and the sensitivity of the novel MADM method of HLNNs.

7.1. Resolution Analysis

According to Table 1, Figure 1 illustrates the SDs of the similarity measures for ρ ∈ [1, 100]. Clearly, the SD increases as ρ increases, reaching 0.051 for ρ = 100. Since the SD reflects the resolution/discrimination level of the MADM method, the resolution will be enhanced as ρ increases, providing more effective decision information for DMs in the MADM process. However, considering that the computational complexity of the MADM method also increases with ρ, we recommend selecting a suitable value of ρ such that the resolution meets the actual requirements and the DMs' preference.

7.2. Sensitivity Analysis of Weights

The average weight vector of W = (1/3, 1/3, 1/3) is applied to the actual example as a comparison with W = (0.35, 0.25, 0.4) to illustrate the weight sensitivity of the MADM method. The decision results with W = (1/3, 1/3, 1/3) are shown in Table 2. Then, the similarity measure values for W = (0.35, 0.25, 0.4) and W = (1/3, 1/3, 1/3) are further illustrated in Figure 2a,b.
From Figure 2, the similarity measure curves with W = (0.35, 0.25, 0.4) are obviously very similar to those with W = (1/3, 1/3, 1/3). By carefully comparing Table 1 and Table 2, we find that the ranking orders are identical except for ρ = 2. For ρ = 2, the ranking orders of g4 > g2 > g3 > g1 for W = (0.35, 0.25, 0.4) and g4 > g3 > g2 > g1 for W = (1/3, 1/3, 1/3) show a slight difference, while the best alternative is the same over the entire range of ρ. Hence, the ranking orders in this example show only slight sensitivity to the attribute weights.

8. Conclusions

This paper first defined the concept of HLNNs by integrating a HFS with a LNN. Then, the normalized generalized distance and similarity measures of HLNNs were presented based on the LCMC method. Next, a novel MADM method based on the proposed similarity measure was presented under the HLNN environment. Finally, a MADM example of an investment problem was illustrated to demonstrate that the developed method is feasible and applicable. Since the HLNN combines the merits of the HFS and LNN, containing more information than the LNN, the MADM method of HLNNs based on the LCMC method is more objective and more suitable for practical applications with HLNN information.
The main advantages of the proposed HLNNs and the LCMC-based MADM method are listed as follows:
(1)
The proposed HLNN provides a new effective way to express more decision information than existing LNNs by considering the hesitancy of DMs.
(2)
The proposed MADM method of HLNNs solves MADM problems with HLNN information for the first time, filling a gap in existing linguistic decision-making methods.
(3)
The proposed distance and similarity measures of HLNNs based on the LCMC extension method for HLNNs are more objective and more reasonable than those reported in [23,24].
Future research on HLNNs will focus on the development of new aggregation operators and correlation coefficients of HLNNs, and their applications in fault diagnosis, medical diagnosis, decision-making, and so on in the HLNN setting.

Author Contributions

J.Y. originally proposed HLNNs and their operations, and W.C. presented the MADM method of HLNNs and the calculation and comparative analysis of an actual example. Both coauthors wrote the paper together.

Funding

This paper was supported by the National Natural Science Foundation of China (No. 61703280).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning, Part I. Inf. Sci. 1975, 8, 199–249.
2. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. A model of consensus in group decision making under linguistic assessments. Fuzzy Sets Syst. 1996, 78, 73–87.
3. Herrera, F.; Herrera-Viedma, E. Linguistic decision analysis: Steps for solving decision problems under linguistic information. Fuzzy Sets Syst. 2000, 115, 67–82.
4. Xu, Z.S. A method based on linguistic aggregation operators for group decision making with linguistic preference relations. Inf. Sci. 2004, 166, 19–30.
5. Xu, Z.S. Uncertain linguistic aggregation operators based approach to multiple attribute group decision making under uncertain linguistic environment. Inf. Sci. 2004, 168, 171–184.
6. Xu, Z.S. Induced uncertain linguistic OWA operators applied to group decision making. Inf. Fusion 2006, 7, 231–238.
7. Wei, G.W. Uncertain linguistic hybrid geometric mean operator and its application to group decision making under uncertain linguistic environment. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2009, 17, 251–267.
8. Peng, B.; Ye, C.; Zeng, S. Uncertain pure linguistic hybrid harmonic averaging operator and generalized interval aggregation operator based approach to group decision making. Knowl.-Based Syst. 2012, 36, 175–181.
9. Chen, Z.C.; Liu, P.H.; Pei, Z. An approach to multiple attribute group decision making based on linguistic intuitionistic fuzzy numbers. Int. J. Comput. Intell. Syst. 2015, 8, 747–760.
10. Liu, P.D.; Liu, J.L.; Merigó, J.M. Partitioned Heronian means based on linguistic intuitionistic fuzzy numbers for dealing with multi-attribute group decision making. Appl. Soft Comput. 2018, 62, 395–422.
11. Peng, X.; Liu, C. Algorithms for neutrosophic soft decision making based on EDAS, new similarity measure and level soft set. J. Intell. Fuzzy Syst. 2017, 32, 955–968.
12. Zavadskas, E.K.; Bausys, R.; Juodagalviene, B.; Garnyte-Sapranaviciene, I. Model for residential house element and material selection by neutrosophic MULTIMOORA method. Eng. Appl. Artif. Intell. 2017, 64, 315–324.
13. Bausys, R.; Juodagalviene, B. Garage location selection for residential house by WASPAS-SVNS method. J. Civ. Eng. Manag. 2017, 23, 421–429.
14. Fang, Z.B.; Ye, J. Multiple attribute group decision-making method based on linguistic neutrosophic numbers. Symmetry 2017, 9, 111.
15. Liu, P.D.; Mahmood, T.; Khan, Q. Group decision making based on power Heronian aggregation operators under linguistic neutrosophic environment. Int. J. Fuzzy Syst. 2018, 20, 970–985.
16. Torra, V.; Narukawa, Y. On hesitant fuzzy sets and decision. In Proceedings of the 18th IEEE International Conference on Fuzzy Systems, Jeju Island, Korea, 20–24 August 2009; pp. 1378–1382.
17. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539.
18. Rodríguez, R.M.; Martínez, L.; Herrera, F. Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119.
19. Rodríguez, R.M.; Martínez, L.; Herrera, F. A group decision making model dealing with comparative linguistic expressions based on hesitant fuzzy linguistic term sets. Inf. Sci. 2013, 241, 28–42.
20. Lin, R.; Zhao, X.F.; Wei, G.W. Models for selecting an ERP system with hesitant fuzzy linguistic information. J. Intell. Fuzzy Syst. 2014, 26, 2155–2165.
21. Wang, J.Q.; Wu, J.T.; Wang, J.; Zhang, H.; Chen, X. Interval-valued hesitant fuzzy linguistic sets and their applications in multi-criteria decision-making problems. Inf. Sci. 2014, 288, 55–72.
22. Ye, J. Multiple attribute decision-making methods based on the expected value and the similarity measure of hesitant neutrosophic linguistic numbers. Cogn. Comput. 2018, 10, 454–463.
23. Zhu, B.; Xu, Z. Consistency measures for hesitant fuzzy linguistic preference relations. IEEE Trans. Fuzzy Syst. 2014, 22, 35–45.
24. Liao, H.; Xu, Z.; Zeng, X.J.; Merigó, J.M. Qualitative decision making with correlation coefficients of hesitant fuzzy linguistic term sets. Knowl.-Based Syst. 2015, 76, 127–138.
25. Ye, J. Multiple-attribute decision-making method under a single-valued neutrosophic hesitant fuzzy environment. J. Intell. Syst. 2014, 24, 23–36.
Figure 1. SD of similarity measure values for ρ ∈ [1, 100] and W = (0.35, 0.25, 0.4).
Figure 2. Similarity measure values of four alternatives for ρ ∈ [1, 100]. (a) W = (0.35, 0.25, 0.4) and (b) W = (1/3, 1/3, 1/3).
Table 1. Decision results of the proposed multiple-attribute decision-making (MADM) method for ρ ∈ [1, 100] and W = (0.35, 0.25, 0.4).

| ρ ¹ | Sw(g1, g*), Sw(g2, g*), Sw(g3, g*), Sw(g4, g*) ² | Ranking Order | AV ³ | SD ⁴ | Best Alternative |
|---|---|---|---|---|---|
| 1 | 0.7354, 0.7493, 0.7406, 0.7747 | g4 > g2 > g3 > g1 | 0.7500 | 0.0151 | g4 |
| 2 | 0.7121, 0.7224, 0.7217, 0.7525 | g4 > g2 > g3 > g1 | 0.7272 | 0.0152 | g4 |
| 3 | 0.6905, 0.6985, 0.7037, 0.7335 | g4 > g3 > g2 > g1 | 0.7066 | 0.0163 | g4 |
| 4 | 0.6710, 0.6781, 0.6867, 0.7182 | g4 > g3 > g2 > g1 | 0.6885 | 0.0180 | g4 |
| 5 | 0.6539, 0.6608, 0.6710, 0.7061 | g4 > g3 > g2 > g1 | 0.6730 | 0.0201 | g4 |
| 10 | 0.5972, 0.6047, 0.6133, 0.6722 | g4 > g3 > g2 > g1 | 0.6219 | 0.0296 | g4 |
| 15 | 0.5690, 0.5754, 0.5817, 0.6575 | g4 > g3 > g2 > g1 | 0.5959 | 0.0358 | g4 |
| 20 | 0.5531, 0.5582, 0.5631, 0.6497 | g4 > g3 > g2 > g1 | 0.5810 | 0.0398 | g4 |
| 30 | 0.5361, 0.5397, 0.5432, 0.6417 | g4 > g3 > g2 > g1 | 0.5652 | 0.0443 | g4 |
| 40 | 0.5273, 0.5301, 0.5327, 0.6376 | g4 > g3 > g2 > g1 | 0.5569 | 0.0466 | g4 |
| 50 | 0.5220, 0.5242, 0.5264, 0.6351 | g4 > g3 > g2 > g1 | 0.5519 | 0.0480 | g4 |
| 100 | 0.5111, 0.5123, 0.5134, 0.6301 | g4 > g3 > g2 > g1 | 0.5417 | 0.0510 | g4 |

Notes: ¹ ρ: parameter; ² Sw(gi, g*): the similarity measures between the alternatives gi (i = 1, 2, 3, 4) and the ideal solution g* = {{<8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>}, {<8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>}, {<8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>, <8,0,0>}}; ³ AV: average value; ⁴ SD: standard deviation.
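The AV and SD columns are simply the mean and standard deviation of the four similarity values in each row (the tabulated SD values match the population form, with the divisor n rather than n − 1, which is an observation rather than a statement from the paper). The ρ = 1 row of Table 1 can be checked as follows:

```python
import statistics

# Similarity values Sw(g1..g4, g*) for rho = 1 and W = (0.35, 0.25, 0.4) (Table 1, first row)
sw = [0.7354, 0.7493, 0.7406, 0.7747]

av = sum(sw) / len(sw)      # average value (AV)
sd = statistics.pstdev(sw)  # population standard deviation (SD)

print(round(av, 4), round(sd, 4))  # 0.75 0.0151
```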
Table 2. Decision results of the proposed MADM method for ρ ∈ [1, 100] and W = (1/3, 1/3, 1/3).

| ρ | Sw(g1, g*), Sw(g2, g*), Sw(g3, g*), Sw(g4, g*) | Ranking Order | AV | SD | Best Alternative |
|---|---|---|---|---|---|
| 1 | 0.7431, 0.7546, 0.7500, 0.7755 | g4 > g2 > g3 > g1 | 0.7558 | 0.0121 | g4 |
| 2 | 0.7189, 0.7278, 0.7300, 0.7529 | g4 > g3 > g2 > g1 | 0.7324 | 0.0125 | g4 |
| 3 | 0.6968, 0.7039, 0.7112, 0.7336 | g4 > g3 > g2 > g1 | 0.7114 | 0.0138 | g4 |
| 4 | 0.6769, 0.6833, 0.6938, 0.7180 | g4 > g3 > g2 > g1 | 0.6930 | 0.0156 | g4 |
| 5 | 0.6596, 0.6659, 0.6779, 0.7057 | g4 > g3 > g2 > g1 | 0.6773 | 0.0177 | g4 |
| 10 | 0.6019, 0.6088, 0.6193, 0.6717 | g4 > g3 > g2 > g1 | 0.6254 | 0.0274 | g4 |
| 15 | 0.5727, 0.5786, 0.5865, 0.6572 | g4 > g3 > g2 > g1 | 0.5988 | 0.0341 | g4 |
| 20 | 0.5560, 0.5608, 0.5671, 0.6495 | g4 > g3 > g2 > g1 | 0.5834 | 0.0384 | g4 |
| 30 | 0.5381, 0.5415, 0.5459, 0.6415 | g4 > g3 > g2 > g1 | 0.5668 | 0.0432 | g4 |
| 40 | 0.5289, 0.5315, 0.5349, 0.6374 | g4 > g3 > g2 > g1 | 0.5582 | 0.0458 | g4 |
| 50 | 0.5232, 0.5254, 0.5281, 0.6350 | g4 > g3 > g2 > g1 | 0.5529 | 0.0474 | g4 |
| 100 | 0.5118, 0.5128, 0.5142, 0.6300 | g4 > g3 > g2 > g1 | 0.5422 | 0.0507 | g4 |
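The Ranking Order column follows directly from sorting the alternatives by their similarity to the ideal solution, largest first. A one-line check against the ρ = 1 row of Table 2:

```python
# Sw(gi, g*) for rho = 1 and W = (1/3, 1/3, 1/3) (Table 2, first row)
sw = {"g1": 0.7431, "g2": 0.7546, "g3": 0.7500, "g4": 0.7755}

ranking = " > ".join(sorted(sw, key=sw.get, reverse=True))
print(ranking)  # g4 > g2 > g3 > g1
```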

Share and Cite

MDPI and ACS Style

Cui, W.; Ye, J. Multiple-Attribute Decision-Making Method Using Similarity Measures of Hesitant Linguistic Neutrosophic Numbers Regarding Least Common Multiple Cardinality. Symmetry 2018, 10, 330. https://doi.org/10.3390/sym10080330
