**Step 4:** End.

The HLNN is a hybrid of the LNN and the HFS: it inherits the advantages of both and expresses decision-making information as a hesitant set of LNNs. The proposed LCMC-based distance and similarity measures can handle not only HLNN information but also LNN information, because the LNN is simply the special case of the HLNN in which the DMs have no hesitation. By contrast, the existing aggregation operators for LNNs [14] cannot aggregate HLNN information, since an HLNN is a set of LNNs of arbitrary length. Furthermore, existing MADM methods cannot handle decision-making problems in the HLNN setting.

Moreover, to ensure the objectivity of the measure results, the proposed LCMC-based distance and similarity measures rely on the LCMC extension method for HLNNs rather than on simply padding with special components, such as the maximum, minimum, or average values, which depend heavily on the personal interests and preferences of the DMs [23,24] and can easily lead to subjective decision results. Thus, the novel MADM method with HLNNs provides a more general and objective decision-making process for decision-makers.

## **6. Actual Example**

In this section, to verify whether the novel MADM approach with HLNNs is feasible and reasonable in practical applications, an investment decision-making case adapted from [14] is illustrated in an HLNN environment. In this case, an investment company makes an optimal selection among a set of four possible manufacturers, *G* = {*g*1, *g*2, *g*3, *g*4}, producing computers (*g*1), cars (*g*2), food (*g*3), and clothing (*g*4), respectively. The four alternatives must satisfy a set of three attributes, *S* = {*s*1, *s*2, *s*3}, including the risk (*s*1), the growth (*s*2), and the environmental impact (*s*3), with the importance given by the weight vector *W* = (0.35, 0.25, 0.4). Some DMs are assigned to assess the alternatives over the attributes by HLNN expressions from the given LT set *H* = {*h*0: *none*, *h*1: *lowest*, *h*2: *lower*, *h*3: *low*, *h*4: *moderate*, *h*5: *high*, *h*6: *higher*, *h*7: *highest*, *h*8: *perfect*}. Then, the assessment results regarding the four alternatives *g*1, *g*2, *g*3, and *g*4 on the three attributes *s*1, *s*2, and *s*3 can be constructed as the decision matrix

*M* =

| | *s*1 | *s*2 | *s*3 |
| --- | --- | --- | --- |
| *g*1 | {< *h*6, *h*1, *h*2 >, < *h*6, *h*1, *h*2 >, < *h*7, *h*3, *h*4 >} | {< *h*7, *h*2, *h*1 >, < *h*6, *h*1, *h*1 >, < *h*7, *h*3, *h*3 >} | {< *h*6, *h*2, *h*2 >, < *h*4, *h*2, *h*3 >} |
| *g*2 | {< *h*7, *h*1, *h*1 >, < *h*7, *h*2, *h*3 >, < *h*6, *h*3, *h*4 >} | {< *h*7, *h*3, *h*2 >, < *h*6, *h*1, *h*1 >} | {< *h*4, *h*2, *h*3 >, < *h*6, *h*2, *h*3 >, < *h*7, *h*2, *h*1 >} |
| *g*3 | {< *h*6, *h*2, *h*2 >, < *h*5, *h*1, *h*2 >} | {< *h*7, *h*1, *h*1 >, < *h*5, *h*1, *h*2 >} | {< *h*6, *h*2, *h*2 >, < *h*5, *h*4, *h*2 >} |
| *g*4 | {< *h*7, *h*1, *h*2 >, < *h*6, *h*1, *h*1 >, < *h*7, *h*2, *h*3 >} | {< *h*7, *h*2, *h*3 >, < *h*5, *h*1, *h*1 >} | {< *h*7, *h*2, *h*1 >, < *h*5, *h*2, *h*3 >} |

The decision procedure then consists of the following steps:

**Step 1**: According to the score and accuracy functions obtained by Equations (1) and (2), rank the LNNs *ϑ<sub>ij</sub><sup>σ(k)</sup>* (*k* = 1, 2, ..., *mij*) in each HLNN *Eli*(*sj*) (*i* = 1, 2, 3, 4 and *j* = 1, 2, 3) in ascending order, and obtain the following matrix:

*M* =

| | *s*1 | *s*2 | *s*3 |
| --- | --- | --- | --- |
| *g*1 | {< *h*7, *h*3, *h*4 >, < *h*6, *h*1, *h*2 >, < *h*6, *h*1, *h*2 >} | {< *h*7, *h*3, *h*3 >, < *h*6, *h*1, *h*1 >, < *h*7, *h*2, *h*1 >} | {< *h*4, *h*2, *h*3 >, < *h*6, *h*2, *h*2 >} |
| *g*2 | {< *h*6, *h*3, *h*4 >, < *h*7, *h*2, *h*3 >, < *h*7, *h*1, *h*1 >} | {< *h*7, *h*3, *h*2 >, < *h*6, *h*1, *h*1 >} | {< *h*4, *h*2, *h*3 >, < *h*6, *h*2, *h*3 >, < *h*7, *h*2, *h*1 >} |
| *g*3 | {< *h*5, *h*1, *h*2 >, < *h*6, *h*2, *h*2 >} | {< *h*5, *h*1, *h*2 >, < *h*7, *h*1, *h*1 >} | {< *h*5, *h*4, *h*2 >, < *h*6, *h*2, *h*2 >} |
| *g*4 | {< *h*7, *h*2, *h*3 >, < *h*7, *h*1, *h*2 >, < *h*6, *h*1, *h*1 >} | {< *h*7, *h*2, *h*3 >, < *h*5, *h*1, *h*1 >} | {< *h*5, *h*2, *h*3 >, < *h*7, *h*2, *h*1 >} |
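This ranking step can be reproduced with a short script. The sketch below models an LNN < *h*t, *h*i, *h*f > as a tuple (*t*, *i*, *f*) and assumes that Equations (1) and (2) are the usual LNN score and accuracy functions over the LT set *H* = {*h*0, ..., *h*8}, i.e., *E*(*ϑ*) = (2·8 + *t* − *i* − *f*)/(3·8) and *H*(*ϑ*) = (*t* − *f*)/8; this assumption is consistent with the sorted matrix above:

```python
# Minimal sketch of Step 1. The score/accuracy formulas are the common LNN
# definitions (assumed here to correspond to Equations (1) and (2)) with tau = 8.
TAU = 8

def score(lnn):
    t, i, f = lnn
    return (2 * TAU + t - i - f) / (3 * TAU)

def accuracy(lnn):
    t, i, f = lnn
    return (t - f) / TAU

def sort_hlnn(hlnn):
    # Ascending order by score; the accuracy function breaks ties.
    return sorted(hlnn, key=lambda lnn: (score(lnn), accuracy(lnn)))

# HLNN of g1 on s1 from matrix M:
print(sort_hlnn([(6, 1, 2), (6, 1, 2), (7, 3, 4)]))
# → [(7, 3, 4), (6, 1, 2), (6, 1, 2)], the first cell of the sorted matrix
```

The tie-break matters: on *s*2 of *g*1, < *h*6, *h*1, *h*1 > and < *h*7, *h*2, *h*1 > have equal scores, and the lower accuracy of < *h*6, *h*1, *h*1 > places it first.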

Then, according to the LCMC *cj* = 6 (*j* = 1, 2, 3) and the numbers of occurrences *Rij* of the LNNs in *Eli*(*sj*) (*i* = 1, 2, 3, 4 and *j* = 1, 2, 3) obtained by Equation (3), yield the following extended decision matrix *M*°:

*M*° =

| | *s*1 | *s*2 | *s*3 |
| --- | --- | --- | --- |
| *g*1 | {< *h*7, *h*3, *h*4 >, < *h*7, *h*3, *h*4 >, < *h*6, *h*1, *h*2 >, < *h*6, *h*1, *h*2 >, < *h*6, *h*1, *h*2 >, < *h*6, *h*1, *h*2 >} | {< *h*7, *h*3, *h*3 >, < *h*7, *h*3, *h*3 >, < *h*6, *h*1, *h*1 >, < *h*6, *h*1, *h*1 >, < *h*7, *h*2, *h*1 >, < *h*7, *h*2, *h*1 >} | {< *h*4, *h*2, *h*3 >, < *h*4, *h*2, *h*3 >, < *h*4, *h*2, *h*3 >, < *h*6, *h*2, *h*2 >, < *h*6, *h*2, *h*2 >, < *h*6, *h*2, *h*2 >} |
| *g*2 | {< *h*6, *h*3, *h*4 >, < *h*6, *h*3, *h*4 >, < *h*7, *h*2, *h*3 >, < *h*7, *h*2, *h*3 >, < *h*7, *h*1, *h*1 >, < *h*7, *h*1, *h*1 >} | {< *h*7, *h*3, *h*2 >, < *h*7, *h*3, *h*2 >, < *h*7, *h*3, *h*2 >, < *h*6, *h*1, *h*1 >, < *h*6, *h*1, *h*1 >, < *h*6, *h*1, *h*1 >} | {< *h*4, *h*2, *h*3 >, < *h*4, *h*2, *h*3 >, < *h*6, *h*2, *h*3 >, < *h*6, *h*2, *h*3 >, < *h*7, *h*2, *h*1 >, < *h*7, *h*2, *h*1 >} |
| *g*3 | {< *h*5, *h*1, *h*2 >, < *h*5, *h*1, *h*2 >, < *h*5, *h*1, *h*2 >, < *h*6, *h*2, *h*2 >, < *h*6, *h*2, *h*2 >, < *h*6, *h*2, *h*2 >} | {< *h*5, *h*1, *h*2 >, < *h*5, *h*1, *h*2 >, < *h*5, *h*1, *h*2 >, < *h*7, *h*1, *h*1 >, < *h*7, *h*1, *h*1 >, < *h*7, *h*1, *h*1 >} | {< *h*5, *h*4, *h*2 >, < *h*5, *h*4, *h*2 >, < *h*5, *h*4, *h*2 >, < *h*6, *h*2, *h*2 >, < *h*6, *h*2, *h*2 >, < *h*6, *h*2, *h*2 >} |
| *g*4 | {< *h*7, *h*2, *h*3 >, < *h*7, *h*2, *h*3 >, < *h*7, *h*1, *h*2 >, < *h*7, *h*1, *h*2 >, < *h*6, *h*1, *h*1 >, < *h*6, *h*1, *h*1 >} | {< *h*7, *h*2, *h*3 >, < *h*7, *h*2, *h*3 >, < *h*7, *h*2, *h*3 >, < *h*5, *h*1, *h*1 >, < *h*5, *h*1, *h*1 >, < *h*5, *h*1, *h*1 >} | {< *h*5, *h*2, *h*3 >, < *h*5, *h*2, *h*3 >, < *h*5, *h*2, *h*3 >, < *h*7, *h*2, *h*1 >, < *h*7, *h*2, *h*1 >, < *h*7, *h*2, *h*1 >} |
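The LCMC extension is mechanical: for each attribute column, *cj* is the least common multiple of the HLNN cardinalities *mij*, and every LNN of a sorted HLNN is repeated *Rij* = *cj*/*mij* times. A minimal sketch, using the *s*1 column of the sorted matrix (cardinalities 3, 3, 2, 3, so *c*1 = 6):

```python
from functools import reduce
from math import lcm  # math.lcm requires Python >= 3.9

def extend_column(column):
    """LCMC extension (Equation (3)): repeat each LNN of every HLNN in one
    attribute column c_j / m_ij times, where c_j = lcm of the cardinalities."""
    c = reduce(lcm, (len(hlnn) for hlnn in column))
    return [[lnn for lnn in hlnn for _ in range(c // len(hlnn))] for hlnn in column]

# Attribute s1 of the sorted matrix for g1..g4:
col_s1 = [
    [(7, 3, 4), (6, 1, 2), (6, 1, 2)],  # g1, m = 3
    [(6, 3, 4), (7, 2, 3), (7, 1, 1)],  # g2, m = 3
    [(5, 1, 2), (6, 2, 2)],             # g3, m = 2
    [(7, 2, 3), (7, 1, 2), (6, 1, 1)],  # g4, m = 3
]
ext = extend_column(col_s1)
print([len(h) for h in ext])  # → [6, 6, 6, 6]
print(ext[2])  # g3: each LNN repeated 3 times
# → [(5, 1, 2), (5, 1, 2), (5, 1, 2), (6, 2, 2), (6, 2, 2), (6, 2, 2)]
```

Because every HLNN in a column is stretched to the same length *cj*, element-wise distance measures between any two alternatives become well defined without padding by the maximum, minimum, or average values.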

**Step 2**: Obtain the similarity measures between the alternatives *g*1, *g*2, *g*3, and *g*4 and the ideal solution *g\** = {{< *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >}, {< *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >}, {< *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >}} by Equation (7) for *ρ* = 1 and 2:

$$\mathcal{S}\_{\text{w}}(\text{g}\_{1},\text{g}^{\*}) = 0.7354,\ \mathcal{S}\_{\text{w}}(\text{g}\_{2},\text{g}^{\*}) = 0.7493,\ \mathcal{S}\_{\text{w}}(\text{g}\_{3},\text{g}^{\*}) = 0.7406,\ \mathcal{S}\_{\text{w}}(\text{g}\_{4},\text{g}^{\*}) = 0.7747 \text{ for } \rho = 1;$$

$$\mathcal{S}\_{\text{w}}(\text{g}\_{1},\text{g}^{\*}) = 0.7121,\ \mathcal{S}\_{\text{w}}(\text{g}\_{2},\text{g}^{\*}) = 0.7224,\ \mathcal{S}\_{\text{w}}(\text{g}\_{3},\text{g}^{\*}) = 0.7217,\ \mathcal{S}\_{\text{w}}(\text{g}\_{4},\text{g}^{\*}) = 0.7525 \text{ for } \rho = 2.$$
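Equation (7) is not restated in this section, so the sketch below implements one plausible weighted Minkowski-type similarity, *S*w = 1 − [Σ*j* *wj* (1/(3*cj*)) Σ*k* (|Δ*t*|^*ρ* + |Δ*i*|^*ρ* + |Δ*f*|^*ρ*)/8^*ρ*]^(1/*ρ*); the normalization by 3*cj* and 8^*ρ* is an assumption, chosen so that the result for *g*1 agrees with the 0.7354 reported above for *ρ* = 1:

```python
def similarity(alt, ideal, weights, rho):
    """Weighted distance-based similarity between one alternative and the ideal
    solution. `alt` and `ideal` hold one extended HLNN (a list of (t, i, f)
    tuples) per attribute. Assumed form: S = 1 - d^(1/rho), where d averages
    |delta/8|^rho over the 3 LNN components and the c_j extended LNNs."""
    tau = 8
    d = 0.0
    for hlnn, hlnn_star, w in zip(alt, ideal, weights):
        s = sum(abs(a - b) ** rho
                for lnn, lnn_star in zip(hlnn, hlnn_star)
                for a, b in zip(lnn, lnn_star))
        d += w * s / (3 * len(hlnn) * tau ** rho)
    return 1.0 - d ** (1.0 / rho)

# Extended HLNNs of alternative g1 (rows: s1, s2, s3 of matrix M° above):
g1 = [
    [(7, 3, 4)] * 2 + [(6, 1, 2)] * 4,
    [(7, 3, 3)] * 2 + [(6, 1, 1)] * 2 + [(7, 2, 1)] * 2,
    [(4, 2, 3)] * 3 + [(6, 2, 2)] * 3,
]
ideal = [[(8, 0, 0)] * 6] * 3
print(round(similarity(g1, ideal, (0.35, 0.25, 0.4), 1), 4))  # → 0.7354
```

For *ρ* = 1 the distance reduces to a weighted mean of normalized component deviations, which is why the Step 2 values for *ρ* = 1 follow from simple arithmetic on the extended matrix.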

**Step 3**: Since *Sw*(*g*4, *g\**) > *Sw*(*g*2, *g\**) > *Sw*(*g*3, *g\**) > *Sw*(*g*1, *g\**) for *ρ* = 1 and 2, the ranking of the four alternatives is *g*4 > *g*2 > *g*3 > *g*1; thus, the best choice is *g*4.

By following the above steps, the MADM calculations are further performed for *ρ* ∈ [3, 100] in this example. The corresponding decision results, including the similarity measures, ranking orders, average value (AV), standard deviation (SD), and the best alternative, are shown in Table 1. Obviously, the ranking order is *g*4 > *g*2 > *g*3 > *g*1 for *ρ* = 1 and 2, and becomes *g*4 > *g*3 > *g*2 > *g*1 for *ρ* > 2, while the best alternative is always *g*4.

**Table 1.** Decision results of the proposed multiple-attribute decision-making (MADM) method for *ρ* ∈ [1, 100] and *W* = (0.35, 0.25, 0.4).


Notes: 1 *ρ*: parameter; 2 *Sw*(*gi*, *g\**): the similarity measure between the alternative *gi* (*i* = 1, 2, 3, 4) and the ideal solution *g\** = {{< *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >}, {< *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >}, {< *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >, < *h*8, *h*0, *h*0 >}}; 3 AV: average value; 4 SD: standard deviation.

## **7. Discussion and Analysis**

In this section, further discussion and analysis are carried out for the resolution and the sensitivity of the novel MADM method of HLNNs.

### *7.1. Resolution Analysis*

According to Table 1, Figure 1 illustrates the SDs of the similarity measures for *ρ* ∈ [1, 100]. Clearly, the SD increases as the value of *ρ* increases, reaching 0.051 for *ρ* = 100. Since the SD reflects the resolution/discrimination level of the MADM method, the resolution/discrimination level is enhanced as *ρ* increases, which provides more effective decision information for decision-makers in the MADM process. However, considering that the computational complexity of the MADM method also grows with *ρ*, we recommend selecting a suitable value of *ρ* such that the resolution degree meets the actual requirements and the DMs' preferences.
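As a concrete illustration of the statistics used here, assuming that the SD in Table 1 is the population standard deviation of the four similarity values at a given *ρ*, the *ρ* = 1 values from Step 2 give:

```python
from statistics import mean, pstdev

# Similarity values of g1..g4 for rho = 1 (Step 2). The SD is assumed to be
# the population standard deviation across the four alternatives.
sims = [0.7354, 0.7493, 0.7406, 0.7747]
print(round(mean(sims), 4))    # → 0.75
print(round(pstdev(sims), 4))  # → 0.0151
```

A larger SD at the same scale means the four similarity values are more spread out, i.e., the alternatives are easier to discriminate, which is the sense in which the SD measures resolution.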

**Figure 1.** SD of similarity measure values for *ρ* ∈ [1, 100] and *W* = (0.35, 0.25, 0.4).

### *7.2. Sensitivity Analysis of Weights*

The average weight vector *W* = (1/3, 1/3, 1/3) is applied to the actual example, as a comparison with *W* = (0.35, 0.25, 0.4), to illustrate the weight sensitivity of the MADM method. The decision results with *W* = (1/3, 1/3, 1/3) are shown in Table 2. The similarity measure values for *W* = (0.35, 0.25, 0.4) and *W* = (1/3, 1/3, 1/3) are further illustrated in Figure 2a,b.

From Figure 2, the similarity measure curves with *W* = (0.35, 0.25, 0.4) are obviously very similar to those with *W* = (1/3, 1/3, 1/3). By carefully comparing Tables 1 and 2, we find that the ranking orders are identical except for *ρ* = 2: the ranking order is *g*4 > *g*2 > *g*3 > *g*1 for *W* = (0.35, 0.25, 0.4) but *g*4 > *g*3 > *g*2 > *g*1 for *W* = (1/3, 1/3, 1/3), a slight difference. The best alternative is the same over the entire range of *ρ*. Hence, the ranking orders in this example show only slight sensitivity to the attribute weights.
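The ranking flip at *ρ* = 2 can be checked directly. The sketch below recomputes the *ρ* = 2 ranking under both weight vectors from the extended matrix *M*°, using the Minkowski-type similarity form assumed earlier (its normalization constants are assumptions consistent with the Step 2 values, not a restatement of Equation (7)):

```python
# Extended HLNNs per alternative (rows: s1, s2, s3 of matrix M°); an LNN
# <h_t, h_i, h_f> is a tuple (t, i, f) and the ideal solution is (8, 0, 0).
TAU = 8

M_EXT = {
    "g1": [[(7,3,4)]*2 + [(6,1,2)]*4, [(7,3,3)]*2 + [(6,1,1)]*2 + [(7,2,1)]*2, [(4,2,3)]*3 + [(6,2,2)]*3],
    "g2": [[(6,3,4)]*2 + [(7,2,3)]*2 + [(7,1,1)]*2, [(7,3,2)]*3 + [(6,1,1)]*3, [(4,2,3)]*2 + [(6,2,3)]*2 + [(7,2,1)]*2],
    "g3": [[(5,1,2)]*3 + [(6,2,2)]*3, [(5,1,2)]*3 + [(7,1,1)]*3, [(5,4,2)]*3 + [(6,2,2)]*3],
    "g4": [[(7,2,3)]*2 + [(7,1,2)]*2 + [(6,1,1)]*2, [(7,2,3)]*3 + [(5,1,1)]*3, [(5,2,3)]*3 + [(7,2,1)]*3],
}

def similarity(alt, weights, rho):
    # Distance to the ideal (8, 0, 0): components |8 - t|, i, f, normalized by 8.
    d = sum(w * sum(abs(TAU - t) ** rho + i ** rho + f ** rho for t, i, f in h)
            / (3 * len(h) * TAU ** rho)
            for h, w in zip(alt, weights))
    return 1.0 - d ** (1.0 / rho)

for label, w in [("W=(0.35,0.25,0.4)", (0.35, 0.25, 0.4)),
                 ("W=(1/3,1/3,1/3)", (1/3, 1/3, 1/3))]:
    ranking = sorted(M_EXT, key=lambda g: similarity(M_EXT[g], w, 2), reverse=True)
    print(label, ranking)
# → W=(0.35,0.25,0.4) ['g4', 'g2', 'g3', 'g1']
# → W=(1/3,1/3,1/3) ['g4', 'g3', 'g2', 'g1']
```

Under this form, *g*2 and *g*3 are nearly tied at *ρ* = 2, so a modest change in the attribute weights is enough to swap their positions while leaving *g*4 clearly best, which matches the weak weight sensitivity observed above.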


**Table 2.** Decision results of the proposed MADM method for *ρ* ∈ [1, 100] and *W* = (1/3, 1/3, 1/3).

**Figure 2.** Similarity measure values of four alternatives for *ρ* ∈ [1, 100]. (**a**) *W* = (0.35, 0.25, 0.4) and (**b**) *W* = (1/3, 1/3, 1/3).
